00:00:00.001 Started by upstream project "autotest-nightly" build number 3917
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3292
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.041 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.042 The recommended git tool is: git
00:00:00.042 using credential 00000000-0000-0000-0000-000000000002
00:00:00.044 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.070 Fetching changes from the remote Git repository
00:00:00.071 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.105 Using shallow fetch with depth 1
00:00:00.105 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.105 > git --version # timeout=10
00:00:00.135 > git --version # 'git version 2.39.2'
00:00:00.135 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.173 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.173 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.896 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.906 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.917 Checking out Revision 456d80899d5187c68de113852b37bde1201fd33a (FETCH_HEAD)
00:00:03.917 > git config core.sparsecheckout # timeout=10
00:00:03.926 > git read-tree -mu HEAD # timeout=10
00:00:03.942 > git checkout -f 456d80899d5187c68de113852b37bde1201fd33a # timeout=5
00:00:03.967 Commit message: "jenkins/config: Drop WFP25 for maintenance"
00:00:03.967 > git rev-list --no-walk 456d80899d5187c68de113852b37bde1201fd33a # timeout=10
00:00:04.048 [Pipeline] Start of Pipeline
00:00:04.061 [Pipeline] library
00:00:04.062 Loading library shm_lib@master
00:00:04.063 Library shm_lib@master is cached. Copying from home.
00:00:04.078 [Pipeline] node
00:00:04.090 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:04.091 [Pipeline] {
00:00:04.103 [Pipeline] catchError
00:00:04.105 [Pipeline] {
00:00:04.117 [Pipeline] wrap
00:00:04.125 [Pipeline] {
00:00:04.131 [Pipeline] stage
00:00:04.132 [Pipeline] { (Prologue)
00:00:04.146 [Pipeline] echo
00:00:04.147 Node: VM-host-SM9
00:00:04.151 [Pipeline] cleanWs
00:00:04.159 [WS-CLEANUP] Deleting project workspace...
00:00:04.159 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.166 [WS-CLEANUP] done
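The checkout above pins the build-pool repo to a single commit via a depth-1 fetch, so the workspace never carries history. A minimal sketch of the same sequence outside Jenkins (URL, proxy, and commit hash are taken from the log; the target directory is arbitrary):

    # Sketch: reproduce the pinned shallow checkout Jenkins performs above.
    git init jbp && cd jbp
    git config http.proxy http://proxy-dmz.intel.com:911   # CI lab proxy; drop it elsewhere
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # The commit below is what FETCH_HEAD resolved to in this run.
    git checkout -f 456d80899d5187c68de113852b37bde1201fd33a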
00:00:04.320 [Pipeline] setCustomBuildProperty
00:00:04.401 [Pipeline] httpRequest
00:00:04.420 [Pipeline] echo
00:00:04.421 Sorcerer 10.211.164.101 is alive
00:00:04.426 [Pipeline] httpRequest
00:00:04.429 HttpMethod: GET
00:00:04.430 URL: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:04.430 Sending request to url: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:04.440 Response Code: HTTP/1.1 200 OK
00:00:04.440 Success: Status code 200 is in the accepted range: 200,404
00:00:04.441 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:06.514 [Pipeline] sh
00:00:06.791 + tar --no-same-owner -xf jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:06.804 [Pipeline] httpRequest
00:00:06.824 [Pipeline] echo
00:00:06.826 Sorcerer 10.211.164.101 is alive
00:00:06.832 [Pipeline] httpRequest
00:00:06.835 HttpMethod: GET
00:00:06.836 URL: http://10.211.164.101/packages/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz
00:00:06.836 Sending request to url: http://10.211.164.101/packages/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz
00:00:06.850 Response Code: HTTP/1.1 200 OK
00:00:06.850 Success: Status code 200 is in the accepted range: 200,404
00:00:06.851 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz
00:00:48.434 [Pipeline] sh
00:00:48.716 + tar --no-same-owner -xf spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz
00:00:51.261 [Pipeline] sh
00:00:51.543 + git -C spdk log --oneline -n5
00:00:51.543 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests
00:00:51.543 ba69d4678 event/scheduler: remove custom opts from static scheduler
00:00:51.543 79fce488b test/scheduler: test scheduling period with dynamic scheduler
00:00:51.543 673f37314 ut/nvme_pcie: allocate nvme_pcie_qpair instead of spdk_nvme_qpair
00:00:51.543 084afa904 util: copy errno before calling stdlib's functions
00:00:51.561 [Pipeline] writeFile
00:00:51.577 [Pipeline] sh
00:00:51.859 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:51.871 [Pipeline] sh
00:00:52.152 + cat autorun-spdk.conf
00:00:52.152 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.152 SPDK_TEST_NVME=1
00:00:52.152 SPDK_TEST_FTL=1
00:00:52.152 SPDK_TEST_ISAL=1
00:00:52.152 SPDK_RUN_ASAN=1
00:00:52.152 SPDK_RUN_UBSAN=1
00:00:52.152 SPDK_TEST_XNVME=1
00:00:52.152 SPDK_TEST_NVME_FDP=1
00:00:52.152 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:52.160 RUN_NIGHTLY=1
00:00:52.161 [Pipeline] }
00:00:52.178 [Pipeline] // stage
00:00:52.192 [Pipeline] stage
00:00:52.195 [Pipeline] { (Run VM)
00:00:52.208 [Pipeline] sh
00:00:52.489 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:52.489 + echo 'Start stage prepare_nvme.sh'
00:00:52.489 Start stage prepare_nvme.sh
00:00:52.489 + [[ -n 4 ]]
00:00:52.489 + disk_prefix=ex4
00:00:52.489 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:00:52.489 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:00:52.489 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:00:52.489 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.489 ++ SPDK_TEST_NVME=1
00:00:52.489 ++ SPDK_TEST_FTL=1
00:00:52.489 ++ SPDK_TEST_ISAL=1
00:00:52.489 ++ SPDK_RUN_ASAN=1
00:00:52.489 ++ SPDK_RUN_UBSAN=1
00:00:52.489 ++ SPDK_TEST_XNVME=1
00:00:52.489 ++ SPDK_TEST_NVME_FDP=1
00:00:52.489 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:52.489 ++ RUN_NIGHTLY=1
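autorun-spdk.conf is a plain shell fragment, which is why prepare_nvme.sh can simply source it (the ++ xtrace lines above). Any wrapper can consume it the same way; a sketch with hypothetical helper names:

    # Sketch: gate work on the flags defined in autorun-spdk.conf.
    source ./autorun-spdk.conf
    (( SPDK_TEST_NVME == 1 ))     && echo "would run NVMe suites"   # run_nvme_tests (hypothetical)
    (( SPDK_TEST_NVME_FDP == 1 )) && echo "would run FDP suites"    # run_fdp_tests (hypothetical)
    (( RUN_NIGHTLY == 1 ))        && echo "nightly-only suites enabled"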
00:00:52.489 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:00:52.489 + nvme_files=()
00:00:52.489 + declare -A nvme_files
00:00:52.489 + backend_dir=/var/lib/libvirt/images/backends
00:00:52.489 + nvme_files['nvme.img']=5G
00:00:52.489 + nvme_files['nvme-cmb.img']=5G
00:00:52.489 + nvme_files['nvme-multi0.img']=4G
00:00:52.489 + nvme_files['nvme-multi1.img']=4G
00:00:52.489 + nvme_files['nvme-multi2.img']=4G
00:00:52.489 + nvme_files['nvme-openstack.img']=8G
00:00:52.489 + nvme_files['nvme-zns.img']=5G
00:00:52.489 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:52.489 + (( SPDK_TEST_FTL == 1 ))
00:00:52.489 + nvme_files["nvme-ftl.img"]=6G
00:00:52.489 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:52.489 + nvme_files["nvme-fdp.img"]=1G
00:00:52.489 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:52.489 + for nvme in "${!nvme_files[@]}"
00:00:52.489 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:00:52.489 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:52.489 + for nvme in "${!nvme_files[@]}"
00:00:52.489 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G
00:00:52.489 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:52.489 + for nvme in "${!nvme_files[@]}"
00:00:52.489 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:00:52.489 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:52.489 + for nvme in "${!nvme_files[@]}"
00:00:52.489 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:00:52.747 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:52.747 + for nvme in "${!nvme_files[@]}"
00:00:52.747 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:00:52.747 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:52.747 + for nvme in "${!nvme_files[@]}"
00:00:52.747 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:00:52.747 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:52.747 + for nvme in "${!nvme_files[@]}"
00:00:52.747 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:00:52.747 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:52.747 + for nvme in "${!nvme_files[@]}"
00:00:52.747 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G
00:00:52.747 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:52.747 + for nvme in "${!nvme_files[@]}"
00:00:52.747 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:00:53.005 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:53.005 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:00:53.005 + echo 'End stage prepare_nvme.sh'
00:00:53.005 End stage prepare_nvme.sh
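prepare_nvme.sh builds its disk list with a Bash associative array keyed by image name, adding the FTL and FDP backends only when the matching SPDK_TEST_* flags are set, then loops over the keys. A condensed sketch of that pattern (the array contents and sizes come from the log; qemu-img is a stand-in for what create_nvme_img.sh ultimately produces, matching the "Formatting ... fmt=raw ... preallocation=falloc" output):

    #!/usr/bin/env bash
    # Sketch of the image-provisioning loop seen above.
    declare -A nvme_files=(
        [nvme.img]=5G [nvme-cmb.img]=5G
        [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G [nvme-zns.img]=5G
    )
    (( SPDK_TEST_FTL == 1 ))      && nvme_files[nvme-ftl.img]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G
    backend_dir=/var/lib/libvirt/images/backends
    mkdir -p "$backend_dir"
    for nvme in "${!nvme_files[@]}"; do
        # The CI calls spdk/scripts/vagrant/create_nvme_img.sh here.
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex4-${nvme}" "${nvme_files[$nvme]}"
    done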
00:00:53.017 [Pipeline] sh
00:00:53.297 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:53.297 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38
00:00:53.297
00:00:53.297 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:00:53.297 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:00:53.297 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:00:53.297 HELP=0
00:00:53.297 DRY_RUN=0
00:00:53.297 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,
00:00:53.297 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:53.297 NVME_AUTO_CREATE=0
00:00:53.297 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,,
00:00:53.297 NVME_CMB=,,,,
00:00:53.297 NVME_PMR=,,,,
00:00:53.297 NVME_ZNS=,,,,
00:00:53.297 NVME_MS=true,,,,
00:00:53.297 NVME_FDP=,,,on,
00:00:53.297 SPDK_VAGRANT_DISTRO=fedora38
00:00:53.297 SPDK_VAGRANT_VMCPU=10
00:00:53.297 SPDK_VAGRANT_VMRAM=12288
00:00:53.297 SPDK_VAGRANT_PROVIDER=libvirt
00:00:53.297 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:53.297 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:53.297 SPDK_OPENSTACK_NETWORK=0
00:00:53.297 VAGRANT_PACKAGE_BOX=0
00:00:53.297 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:00:53.297 FORCE_DISTRO=true
00:00:53.297 VAGRANT_BOX_VERSION=
00:00:53.297 EXTRA_VAGRANTFILES=
00:00:53.297 NIC_MODEL=e1000
00:00:53.297
00:00:53.297 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt'
00:00:53.297 /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:00:55.829 Bringing machine 'default' up with 'libvirt' provider...
00:00:56.395 ==> default: Creating image (snapshot of base box volume).
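Each -b argument above is a comma-separated disk spec that vagrant_create_vm.sh expands into the NVME_* variables printed underneath. Judging by where the `true` and `on` values land (NVME_MS for the FTL disk, NVME_FDP for the FDP disk), the field order appears to be file, type, extra namespaces, CMB, PMR, ZNS, metadata, FDP; a hedged sketch of that parsing with illustrative variable names:

    # Sketch: split one "-b" disk spec into fields (field names are inferred,
    # not taken from the script itself).
    spec="/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true"
    IFS=',' read -r file type namespaces cmb pmr zns ms fdp <<< "$spec"
    echo "file=$file type=${type:-nvme} ms=${ms:-false} fdp=${fdp:-off}"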
00:00:56.395 ==> default: Creating domain with the following settings...
00:00:56.395 ==> default:  -- Name:              fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721796610_949fe9e073dedc136e68
00:00:56.395 ==> default:  -- Domain type:       kvm
00:00:56.395 ==> default:  -- Cpus:              10
00:00:56.395 ==> default:  -- Feature:           acpi
00:00:56.395 ==> default:  -- Feature:           apic
00:00:56.395 ==> default:  -- Feature:           pae
00:00:56.395 ==> default:  -- Memory:            12288M
00:00:56.395 ==> default:  -- Memory Backing:    hugepages:
00:00:56.395 ==> default:  -- Management MAC:
00:00:56.395 ==> default:  -- Loader:
00:00:56.395 ==> default:  -- Nvram:
00:00:56.395 ==> default:  -- Base box:          spdk/fedora38
00:00:56.395 ==> default:  -- Storage pool:      default
00:00:56.395 ==> default:  -- Image:             /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721796610_949fe9e073dedc136e68.img (20G)
00:00:56.395 ==> default:  -- Volume Cache:      default
00:00:56.395 ==> default:  -- Kernel:
00:00:56.395 ==> default:  -- Initrd:
00:00:56.395 ==> default:  -- Graphics Type:     vnc
00:00:56.395 ==> default:  -- Graphics Port:     -1
00:00:56.395 ==> default:  -- Graphics IP:       127.0.0.1
00:00:56.395 ==> default:  -- Graphics Password: Not defined
00:00:56.395 ==> default:  -- Video Type:        cirrus
00:00:56.395 ==> default:  -- Video VRAM:        9216
00:00:56.395 ==> default:  -- Sound Type:
00:00:56.395 ==> default:  -- Keymap:            en-us
00:00:56.395 ==> default:  -- TPM Path:
00:00:56.395 ==> default:  -- INPUT:             type=mouse, bus=ps2
00:00:56.395 ==> default:  -- Command line args:
00:00:56.395 ==> default:    -> value=-device,
00:00:56.395 ==> default:    -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:56.395 ==> default:    -> value=-drive,
00:00:56.395 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:00:56.395 ==> default:    -> value=-device,
00:00:56.395 ==> default:    -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:00:56.395 ==> default:    -> value=-device,
00:00:56.395 ==> default:    -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:56.395 ==> default:    -> value=-drive,
00:00:56.395 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0,
00:00:56.395 ==> default:    -> value=-device,
00:00:56.395 ==> default:    -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.395 ==> default:    -> value=-device,
00:00:56.395 ==> default:    -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:00:56.395 ==> default:    -> value=-drive,
00:00:56.395 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:00:56.395 ==> default:    -> value=-device,
00:00:56.395 ==> default:    -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.395 ==> default:    -> value=-drive,
00:00:56.395 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:00:56.395 ==> default:    -> value=-device,
00:00:56.395 ==> default:    -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.395 ==> default:    -> value=-drive,
00:00:56.395 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:00:56.395 ==> default:    -> value=-device,
00:00:56.395 ==> default:    -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.395 ==> default:    -> value=-device,
00:00:56.396 ==> default:    -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:00:56.396 ==> default:    -> value=-device,
00:00:56.396 ==> default:    -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:00:56.396 ==> default:    -> value=-drive,
00:00:56.396 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:00:56.396 ==> default:    -> value=-device,
00:00:56.396 ==> default:    -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.654 ==> default: Creating shared folders metadata...
00:00:56.654 ==> default: Starting domain.
00:00:58.030 ==> default: Waiting for domain to get an IP address...
00:01:12.905 ==> default: Waiting for SSH to become available...
00:01:14.833 ==> default: Configuring and enabling network interfaces...
00:01:19.024     default: SSH address: 192.168.121.142:22
00:01:19.024     default: SSH username: vagrant
00:01:19.024     default: SSH auth method: private key
00:01:20.926 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:29.040 ==> default: Mounting SSHFS shared folder...
00:01:29.975 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:01:29.975 ==> default: Checking Mount..
00:01:31.352 ==> default: Folder Successfully Mounted!
00:01:31.352 ==> default: Running provisioner: file...
00:01:32.329     default: ~/.gitconfig => .gitconfig
00:01:32.586
00:01:32.586   SUCCESS!
00:01:32.586
00:01:32.586   cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use.
00:01:32.586   Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:32.586   Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm.
00:01:32.586
00:01:32.595 [Pipeline] }
00:01:32.612 [Pipeline] // stage
00:01:32.621 [Pipeline] dir
00:01:32.621 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt
00:01:32.623 [Pipeline] {
00:01:32.637 [Pipeline] catchError
00:01:32.638 [Pipeline] {
00:01:32.652 [Pipeline] sh
00:01:32.930 + vagrant ssh-config --host vagrant
00:01:32.930 + sed -ne /^Host/,$p
00:01:32.930 + tee ssh_conf
00:01:36.212 Host vagrant
00:01:36.212   HostName 192.168.121.142
00:01:36.212   User vagrant
00:01:36.212   Port 22
00:01:36.212   UserKnownHostsFile /dev/null
00:01:36.212   StrictHostKeyChecking no
00:01:36.212   PasswordAuthentication no
00:01:36.212   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:01:36.212   IdentitiesOnly yes
00:01:36.212   LogLevel FATAL
00:01:36.212   ForwardAgent yes
00:01:36.212   ForwardX11 yes
00:01:36.212
00:01:36.224 [Pipeline] withEnv
00:01:36.227 [Pipeline] {
00:01:36.241 [Pipeline] sh
00:01:36.519 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:36.519 source /etc/os-release
00:01:36.519 [[ -e /image.version ]] && img=$(< /image.version)
00:01:36.519 # Minimal, systemd-like check.
00:01:36.519 if [[ -e /.dockerenv ]]; then
00:01:36.519 # Clear garbage from the node's name:
00:01:36.519 # agt-er_autotest_547-896 -> autotest_547-896
00:01:36.519 # $HOSTNAME is the actual container id
00:01:36.519 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:36.519 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:36.519 # We can assume this is a mount from a host where container is running,
00:01:36.519 # so fetch its hostname to easily identify the target swarm worker.
00:01:36.519 container="$(< /etc/hostname) ($agent)"
00:01:36.519 else
00:01:36.519 # Fallback
00:01:36.519 container=$agent
00:01:36.519 fi
00:01:36.519 fi
00:01:36.519 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:36.519
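The -device/-drive pairs assembled during domain creation above give controller nvme-3 an FDP-enabled NVMe subsystem. Condensed into a standalone invocation for reference (a sketch: every argument is lifted verbatim from the log, minus vagrant's trailing commas; the emulator is the vanilla QEMU 8.0 build this run uses, and a complete command would also need machine, memory, and boot options):

    # Sketch: the FDP-capable controller (nvme-3) as QEMU sees it.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

The FDP geometry (reclaim-unit size, groups, and handles) lives on the subsystem, while the namespace only attaches a backing drive; that split is why the log configures nvme-subsys before the nvme and nvme-ns devices.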
00:01:36.530 [Pipeline] }
00:01:36.549 [Pipeline] // withEnv
00:01:36.557 [Pipeline] setCustomBuildProperty
00:01:36.573 [Pipeline] stage
00:01:36.575 [Pipeline] { (Tests)
00:01:36.594 [Pipeline] sh
00:01:36.872 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:37.142 [Pipeline] sh
00:01:37.419 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:37.692 [Pipeline] timeout
00:01:37.692 Timeout set to expire in 40 min
00:01:37.694 [Pipeline] {
00:01:37.710 [Pipeline] sh
00:01:37.987 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:38.554 HEAD is now at 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests
00:01:38.566 [Pipeline] sh
00:01:38.874 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:38.938 [Pipeline] sh
00:01:39.210 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:39.481 [Pipeline] sh
00:01:39.757 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:01:40.015 ++ readlink -f spdk_repo
00:01:40.015 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:40.015 + [[ -n /home/vagrant/spdk_repo ]]
00:01:40.015 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:40.015 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:40.015 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:40.015 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:40.015 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:40.015 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:40.015 + cd /home/vagrant/spdk_repo
00:01:40.015 + source /etc/os-release
00:01:40.015 ++ NAME='Fedora Linux'
00:01:40.015 ++ VERSION='38 (Cloud Edition)'
00:01:40.015 ++ ID=fedora
00:01:40.015 ++ VERSION_ID=38
00:01:40.015 ++ VERSION_CODENAME=
00:01:40.015 ++ PLATFORM_ID=platform:f38
00:01:40.015 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:40.015 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:40.015 ++ LOGO=fedora-logo-icon
00:01:40.015 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:40.015 ++ HOME_URL=https://fedoraproject.org/
00:01:40.015 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:40.015 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:40.015 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:40.015 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:40.015 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:40.015 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:40.015 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:40.015 ++ SUPPORT_END=2024-05-14
00:01:40.015 ++ VARIANT='Cloud Edition'
00:01:40.015 ++ VARIANT_ID=cloud
00:01:40.015 + uname -a
00:01:40.015 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:40.015 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:40.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:40.531 Hugepages
00:01:40.531 node     hugesize     free /  total
00:01:40.531 node0   1048576kB        0 /      0
00:01:40.531 node0      2048kB        0 /      0
00:01:40.531
00:01:40.531 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:40.531 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:01:40.531 NVMe     0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:01:40.531 NVMe     0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1
00:01:40.789 NVMe     0000:00:12.0    1b36   0010   unknown nvme             nvme2      nvme2n1 nvme2n2 nvme2n3
00:01:40.789 NVMe     0000:00:13.0    1b36   0010   unknown nvme             nvme3      nvme3n1
00:01:40.789 + rm -f /tmp/spdk-ld-path
00:01:40.789 + source autorun-spdk.conf
00:01:40.789 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:40.789 ++ SPDK_TEST_NVME=1
00:01:40.789 ++ SPDK_TEST_FTL=1
00:01:40.789 ++ SPDK_TEST_ISAL=1
00:01:40.789 ++ SPDK_RUN_ASAN=1
00:01:40.789 ++ SPDK_RUN_UBSAN=1
00:01:40.789 ++ SPDK_TEST_XNVME=1
00:01:40.789 ++ SPDK_TEST_NVME_FDP=1
00:01:40.789 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:40.789 ++ RUN_NIGHTLY=1
00:01:40.789 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:40.789 + [[ -n '' ]]
00:01:40.789 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:40.789 + for M in /var/spdk/build-*-manifest.txt
00:01:40.789 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:40.789 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:40.789 + for M in /var/spdk/build-*-manifest.txt
00:01:40.789 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:40.789 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:40.789 ++ uname
00:01:40.789 + [[ Linux == \L\i\n\u\x ]]
00:01:40.789 + sudo dmesg -T
00:01:40.789 + sudo dmesg --clear
00:01:40.789 + dmesg_pid=5212
00:01:40.789 + sudo dmesg -Tw
00:01:40.789 + [[ Fedora Linux == FreeBSD ]]
00:01:40.789 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:40.789 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:40.789 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
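setup.sh status above maps each QEMU NVMe controller (vendor 1b36, device 0010) to its PCI BDF and block devices; nvme2 carries the three multi-namespace images and nvme3 the FDP image. The same mapping can be read straight from sysfs without SPDK; a small sketch:

    # Sketch: list NVMe controllers with their PCI addresses and namespaces.
    for ctrl in /sys/class/nvme/nvme*; do
        name=$(basename "$ctrl")
        bdf=$(cat "$ctrl/address")                  # PCI BDF, e.g. 0000:00:10.0
        ns=$(ls -d "$ctrl/${name}n"* 2>/dev/null | xargs -r -n1 basename)
        echo "$name $bdf $ns"
    done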
00:01:40.789 + [[ -x /usr/src/fio-static/fio ]]
00:01:40.789 + export FIO_BIN=/usr/src/fio-static/fio
00:01:40.789 + FIO_BIN=/usr/src/fio-static/fio
00:01:40.789 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:40.789 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:40.789 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:40.789 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:40.789 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:40.789 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:40.789 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:40.789 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:40.789 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:40.789 Test configuration:
00:01:40.789 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:40.789 SPDK_TEST_NVME=1
00:01:40.789 SPDK_TEST_FTL=1
00:01:40.789 SPDK_TEST_ISAL=1
00:01:40.789 SPDK_RUN_ASAN=1
00:01:40.789 SPDK_RUN_UBSAN=1
00:01:40.789 SPDK_TEST_XNVME=1
00:01:40.789 SPDK_TEST_NVME_FDP=1
00:01:40.789 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:40.789 RUN_NIGHTLY=1
04:50:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
04:50:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
04:50:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
04:50:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
04:50:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:50:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:50:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:50:55 -- paths/export.sh@5 -- $ export PATH
04:50:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:50:55 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
04:50:55 -- common/autobuild_common.sh@447 -- $ date +%s
04:50:55 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721796655.XXXXXX
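paths/export.sh above prepends the toolchain directories each time it is sourced, so by the echo at @6 the PATH already holds every entry two or three times. That is harmless but avoidable; a hedged sketch of an idempotent prepend (not what export.sh does, just the usual fix):

    # Sketch: prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already there, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH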
00:01:40.790 04:50:55 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721796655.sWcIyj
00:01:40.790 04:50:55 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:40.790 04:50:55 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:01:40.790 04:50:55 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:40.790 04:50:55 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:40.790 04:50:55 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:40.790 04:50:55 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:40.790 04:50:55 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:01:40.790 04:50:55 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.048 04:50:55 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
04:50:55 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
04:50:55 -- pm/common@17 -- $ local monitor
04:50:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
04:50:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
04:50:55 -- pm/common@25 -- $ sleep 1
04:50:55 -- pm/common@21 -- $ date +%s
04:50:55 -- pm/common@21 -- $ date +%s
04:50:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721796655
04:50:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721796655
00:01:41.048 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721796655_collect-vmstat.pm.log
00:01:41.048 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721796655_collect-cpu-load.pm.log
00:01:41.982 04:50:56 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:41.982 04:50:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
04:50:56 -- spdk/autobuild.sh@12 -- $ umask 022
04:50:56 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:41.982 04:50:56 -- spdk/autobuild.sh@16 -- $ date -u
00:01:41.982 Wed Jul 24 04:50:56 AM UTC 2024
00:01:41.982 04:50:56 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:41.982 v24.09-pre-309-g78cbcfdde
00:01:41.982 04:50:56 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:41.982 04:50:56 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:41.982 04:50:56 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:41.982 04:50:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:41.982 04:50:56 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.982 ************************************
00:01:41.982 START TEST asan
00:01:41.982 ************************************
00:01:41.982 using asan
00:01:41.982 04:50:56 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan'
00:01:41.982
00:01:41.982 real	0m0.000s
00:01:41.982 user	0m0.000s
00:01:41.982 sys	0m0.000s
00:01:41.982 04:50:56 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:41.982 04:50:56 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:41.982 ************************************
00:01:41.982 END TEST asan
00:01:41.982 ************************************
00:01:41.982 04:50:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:41.982 04:50:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:41.982 04:50:56 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:41.982 04:50:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:41.982 04:50:56 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.982 ************************************
00:01:41.982 START TEST ubsan
00:01:41.982 ************************************
00:01:41.982 using ubsan
00:01:41.982 04:50:56 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:41.982
00:01:41.982 real	0m0.000s
00:01:41.982 user	0m0.000s
00:01:41.982 sys	0m0.000s
00:01:41.982 04:50:56 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:41.982 04:50:56 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:41.982 ************************************
00:01:41.982 END TEST ubsan
00:01:41.982 ************************************
00:01:41.982 04:50:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:41.982 04:50:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:41.982 04:50:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:41.982 04:50:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:41.982 04:50:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:41.982 04:50:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:41.982 04:50:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:41.982 04:50:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:41.982 04:50:56 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:42.240 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:42.240 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:42.807 Using 'verbs' RDMA provider
00:01:55.972 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:10.853 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:10.853 Creating mk/config.mk...done.
00:02:10.853 Creating mk/cc.flags.mk...done.
00:02:10.853 Type 'make' to build.
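The configure invocation at autobuild.sh@67 is the whole build recipe; reproducing it outside CI needs nothing beyond the flags shown (a sketch: the fio source tree at /usr/src/fio and the in-tree xnvme submodule must exist locally, as they do in the VM image, and the log's job count is reused):

    # Sketch: rebuild SPDK the way this run does.
    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-xnvme --with-shared
    make -j10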
00:02:10.853 04:51:23 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:10.853 04:51:23 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:02:10.853 04:51:23 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:10.853 04:51:23 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.853 ************************************
00:02:10.853 START TEST make
00:02:10.853 ************************************
00:02:10.853 04:51:23 make -- common/autotest_common.sh@1123 -- $ make -j10
00:02:10.853 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:10.853 	export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:10.853 	meson setup builddir \
00:02:10.853 	-Dwith-libaio=enabled \
00:02:10.853 	-Dwith-liburing=enabled \
00:02:10.853 	-Dwith-libvfn=disabled \
00:02:10.853 	-Dwith-spdk=false && \
00:02:10.853 	meson compile -C builddir && \
00:02:10.853 	cd -)
00:02:10.853 make[1]: Nothing to be done for 'all'.
00:02:12.227 The Meson build system
00:02:12.227 Version: 1.3.1
00:02:12.227 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:12.227 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:12.227 Build type: native build
00:02:12.227 Project name: xnvme
00:02:12.227 Project version: 0.7.3
00:02:12.227 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:12.227 C linker for the host machine: cc ld.bfd 2.39-16
00:02:12.227 Host machine cpu family: x86_64
00:02:12.227 Host machine cpu: x86_64
00:02:12.227 Message: host_machine.system: linux
00:02:12.227 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:12.227 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:12.227 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:12.227 Run-time dependency threads found: YES
00:02:12.227 Has header "setupapi.h" : NO
00:02:12.227 Has header "linux/blkzoned.h" : YES
00:02:12.227 Has header "linux/blkzoned.h" : YES (cached)
00:02:12.227 Has header "libaio.h" : YES
00:02:12.227 Library aio found: YES
00:02:12.227 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:12.227 Run-time dependency liburing found: YES 2.2
00:02:12.227 Dependency libvfn skipped: feature with-libvfn disabled
00:02:12.227 Run-time dependency appleframeworks found: NO (tried framework)
00:02:12.227 Run-time dependency appleframeworks found: NO (tried framework)
00:02:12.227 Configuring xnvme_config.h using configuration
00:02:12.227 Configuring xnvme.spec using configuration
00:02:12.227 Run-time dependency bash-completion found: YES 2.11
00:02:12.227 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:12.227 Program cp found: YES (/usr/bin/cp)
00:02:12.227 Has header "winsock2.h" : NO
00:02:12.227 Has header "dbghelp.h" : NO
00:02:12.227 Library rpcrt4 found: NO
00:02:12.227 Library rt found: YES
00:02:12.227 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:12.227 Found CMake: /usr/bin/cmake (3.27.7)
00:02:12.227 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:12.227 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:12.227 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:12.227 Build targets in project: 32
00:02:12.227
00:02:12.227 xnvme 0.7.3
00:02:12.227
00:02:12.227   User defined options
00:02:12.227     with-libaio : enabled
00:02:12.227     with-liburing: enabled
00:02:12.227     with-libvfn : disabled
00:02:12.227     with-spdk   : false
00:02:12.227
00:02:12.227 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:12.485 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:12.485 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:12.485 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:12.485 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:12.744 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:12.744 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:12.744 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:12.744 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:12.744 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:12.744 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:02:12.744 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:12.744 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:02:12.744 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:12.744 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:02:12.744 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:02:12.744 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:02:12.744 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:02:12.744 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:02:12.744 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:02:12.744 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:02:12.744 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:02:13.003 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:02:13.003 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:02:13.003 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:02:13.003 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:02:13.003 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:02:13.003 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:02:13.003 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:02:13.003 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:02:13.003 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:02:13.003 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:02:13.003 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:02:13.003 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:02:13.003 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:02:13.003 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:02:13.003 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:02:13.003 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:02:13.003 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:02:13.003 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:02:13.003 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:02:13.003 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:02:13.003 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:02:13.003 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:02:13.003 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:02:13.003 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:02:13.003 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:02:13.003 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:02:13.003 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:02:13.003 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:02:13.003 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:02:13.003 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:02:13.003 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:02:13.003 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:02:13.261 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:02:13.261 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:02:13.261 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:02:13.261 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:02:13.261 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:02:13.261 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:02:13.261 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:02:13.261 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:02:13.261 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:02:13.261 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:02:13.261 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:02:13.261 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:02:13.261 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:02:13.261 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:02:13.261 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:02:13.261 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:02:13.519 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:02:13.519 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:02:13.519 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:02:13.519 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:02:13.519 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:02:13.519 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:02:13.519 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:02:13.519 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:02:13.519 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:02:13.519 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:02:13.519 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:02:13.519 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:02:13.519 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:02:13.519 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:02:13.519 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:02:13.777 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:02:13.777 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:02:13.777 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:02:13.777 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:02:13.777 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:02:13.777 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:02:13.777 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:02:13.777 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:02:13.777 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:02:13.777 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:02:13.777 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:02:13.777 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:02:13.777 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:02:13.777 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:02:13.777 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:02:13.777 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:02:13.777 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:02:13.777 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:02:14.036 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:02:14.036 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:02:14.036 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:02:14.036 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:02:14.036 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:02:14.036 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:02:14.036 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:02:14.036 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:02:14.036 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:02:14.036 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:02:14.036 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:02:14.036 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:02:14.036 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:02:14.036 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:02:14.036 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:02:14.036 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:02:14.036 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:02:14.036 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:02:14.036 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:02:14.036 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:02:14.036 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:02:14.036 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:02:14.036 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:02:14.036 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:02:14.036 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:02:14.294 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:02:14.294 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:02:14.294 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:02:14.294 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:02:14.294 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:02:14.294 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:02:14.294 [133/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:02:14.294 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:02:14.294 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:02:14.294 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:02:14.294 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:02:14.294 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:02:14.294 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:02:14.294 [140/203] Linking target lib/libxnvme.so
00:02:14.294 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:02:14.294 [142/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:02:14.294 [143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:02:14.552 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:02:14.552 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:02:14.552 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:02:14.552 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:02:14.552 [148/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:02:14.552 [149/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:02:14.552 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:02:14.552 [151/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:02:14.552 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:02:14.552 [153/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:02:14.552 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:02:14.552 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:02:14.810 [156/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:02:14.810 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:02:14.810 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:02:14.810 [159/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:02:14.810 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:02:14.810 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:02:14.810 [162/203] Compiling C object tools/xdd.p/xdd.c.o
00:02:14.810 [163/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:02:14.810 [164/203] Compiling C object tools/kvs.p/kvs.c.o
00:02:14.810 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:02:14.810 [166/203] Compiling C object tools/lblk.p/lblk.c.o
00:02:14.810 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:02:14.810 [168/203] Compiling C object tools/zoned.p/zoned.c.o
00:02:15.069 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:02:15.069 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:02:15.069 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:02:15.069 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:02:15.069 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:02:15.069 [174/203] Linking static target lib/libxnvme.a
00:02:15.069 [175/203] Linking target tests/xnvme_tests_lblk
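All 203 steps above come from the single ninja file meson generated: the shared-library (libxnvme.so.p) and static-library (libxnvme.a.p) objects are compiled from the same sources, then the tests, tools, and examples are linked against them. The same target list can be inspected without rebuilding, using standard ninja tooling and the builddir from this run:

    # Sketch: list the ninja targets meson generated for xnvme.
    ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir -t targets all | head -n 20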
00:02:15.069 [176/203] Linking target tests/xnvme_tests_async_intf
00:02:15.069 [177/203] Linking target tests/xnvme_tests_cli
00:02:15.327 [178/203] Linking target tests/xnvme_tests_xnvme_file
00:02:15.327 [179/203] Linking target tests/xnvme_tests_znd_zrwa
00:02:15.327 [180/203] Linking target tests/xnvme_tests_enum
00:02:15.327 [181/203] Linking target tests/xnvme_tests_znd_explicit_open
00:02:15.327 [182/203] Linking target tests/xnvme_tests_scc
00:02:15.327 [183/203] Linking target tests/xnvme_tests_znd_append
00:02:15.327 [184/203] Linking target tests/xnvme_tests_buf
00:02:15.327 [185/203] Linking target tests/xnvme_tests_kvs
00:02:15.327 [186/203] Linking target tests/xnvme_tests_map
00:02:15.327 [187/203] Linking target tools/xnvme
00:02:15.327 [188/203] Linking target tests/xnvme_tests_xnvme_cli
00:02:15.327 [189/203] Linking target tests/xnvme_tests_ioworker
00:02:15.327 [190/203] Linking target tests/xnvme_tests_znd_state
00:02:15.327 [191/203] Linking target tools/xnvme_file
00:02:15.327 [192/203] Linking target tools/lblk
00:02:15.327 [193/203] Linking target tools/xdd
00:02:15.327 [194/203] Linking target tools/kvs
00:02:15.327 [195/203] Linking target examples/xnvme_dev
00:02:15.327 [196/203] Linking target tools/zoned
00:02:15.327 [197/203] Linking target examples/xnvme_enum
00:02:15.327 [198/203] Linking target examples/xnvme_single_sync
00:02:15.327 [199/203] Linking target examples/xnvme_hello
00:02:15.327 [200/203] Linking target examples/xnvme_io_async
00:02:15.327 [201/203] Linking target examples/zoned_io_async
00:02:15.327 [202/203] Linking target examples/xnvme_single_async
00:02:15.327 [203/203] Linking target examples/zoned_io_sync
00:02:15.327 INFO: autodetecting backend as ninja
00:02:15.327 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:15.327 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:21.887 The Meson build system
00:02:21.887 Version: 1.3.1
00:02:21.887 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:21.887 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:21.887 Build type: native build
00:02:21.887 Program cat found: YES (/usr/bin/cat)
00:02:21.887 Project name: DPDK
00:02:21.887 Project version: 24.03.0
00:02:21.887 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:21.887 C linker for the host machine: cc ld.bfd 2.39-16
00:02:21.887 Host machine cpu family: x86_64
00:02:21.887 Host machine cpu: x86_64
00:02:21.887 Message: ## Building in Developer Mode ##
00:02:21.887 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:21.887 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:21.887 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:21.887 Program python3 found: YES (/usr/bin/python3)
00:02:21.887 Program cat found: YES (/usr/bin/cat)
00:02:21.887 Compiler for C supports arguments -march=native: YES
00:02:21.887 Checking for size of "void *" : 8
00:02:21.887 Checking for size of "void *" : 8 (cached)
00:02:21.887 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:21.887 Library m found: YES
00:02:21.887 Library numa found: YES
00:02:21.887 Has header "numaif.h" : YES
00:02:21.887 Library fdt found: NO
00:02:21.887 Library execinfo found: NO
00:02:21.887 Has header "execinfo.h" : YES
00:02:21.887 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:21.887 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:21.887 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:21.887 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:21.887 Run-time dependency openssl found: YES 3.0.9
00:02:21.887 Run-time dependency libpcap found: YES 1.10.4
00:02:21.887 Has header "pcap.h" with dependency libpcap: YES
00:02:21.887 Compiler for C supports arguments -Wcast-qual: YES
00:02:21.887 Compiler for C supports arguments -Wdeprecated: YES
00:02:21.887 Compiler for C supports arguments -Wformat: YES
00:02:21.887 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:21.887 Compiler for C supports arguments -Wformat-security: NO
00:02:21.887 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:21.887 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:21.887 Compiler for C supports arguments -Wnested-externs: YES
00:02:21.887 Compiler for C supports arguments -Wold-style-definition: YES
00:02:21.887 Compiler for C supports arguments -Wpointer-arith: YES
00:02:21.887 Compiler for C supports arguments -Wsign-compare: YES
00:02:21.887 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:21.887 Compiler for C supports arguments -Wundef: YES
00:02:21.887 Compiler for C supports arguments -Wwrite-strings: YES
00:02:21.887 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:21.887 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:21.887 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:21.887 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:21.887 Program objdump found: YES (/usr/bin/objdump)
00:02:21.887 Compiler for C supports arguments -mavx512f: YES
00:02:21.887 Checking if "AVX512 checking" compiles: YES
00:02:21.887 Fetching value of define "__SSE4_2__" : 1
00:02:21.887 Fetching value of define "__AES__" : 1
00:02:21.887 Fetching value of define "__AVX__" : 1
00:02:21.887 Fetching value of define "__AVX2__" : 1
00:02:21.887 Fetching value of define "__AVX512BW__" : (undefined)
00:02:21.887 Fetching value of define "__AVX512CD__" : (undefined)
00:02:21.887 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:21.887 Fetching value of define "__AVX512F__" : (undefined)
00:02:21.887 Fetching value of define "__AVX512VL__" : (undefined)
00:02:21.887 Fetching value of define "__PCLMUL__" : 1
00:02:21.887 Fetching value of define "__RDRND__" : 1
00:02:21.887 Fetching value of define "__RDSEED__" : 1
00:02:21.887 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:21.887 Fetching value of define "__znver1__" : (undefined)
00:02:21.887 Fetching value of define "__znver2__" : (undefined)
00:02:21.887 Fetching value of define "__znver3__" : (undefined)
00:02:21.887 Fetching value of define "__znver4__" : (undefined)
00:02:21.888 Library asan found: YES
00:02:21.888 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:21.888 Message: lib/log: Defining dependency "log"
00:02:21.888 Message: lib/kvargs: Defining dependency "kvargs"
00:02:21.888 Message: lib/telemetry: Defining dependency "telemetry"
00:02:21.888 Library rt found: YES
00:02:21.888 Checking for function "getentropy" : NO
00:02:21.888 Message: lib/eal: Defining dependency "eal"
00:02:21.888 Message: lib/ring: Defining dependency "ring"
00:02:21.888 Message: lib/rcu: Defining dependency "rcu"
00:02:21.888 Message: lib/mempool: Defining dependency "mempool"
00:02:21.888 Message: lib/mbuf: Defining dependency "mbuf"
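The "Fetching value of define" lines are meson asking the compiler which ISA macros -march=native turns on for this VM's CPU; here the AVX512 family comes back (undefined), so DPDK will only emit up to AVX2 code paths by default. The same probe can be run from a shell with gcc or clang:

    # Sketch: dump the predefined SIMD macros the compiler enables for this CPU.
    cc -march=native -dM -E - </dev/null | \
        grep -E '__(SSE4_2|AES|AVX|AVX2|AVX512[A-Z]+|PCLMUL|RDRND|RDSEED|VPCLMULQDQ)__'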
00:02:21.888 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:21.888 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:21.888 Compiler for C supports arguments -mpclmul: YES 00:02:21.888 Compiler for C supports arguments -maes: YES 00:02:21.888 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:21.888 Compiler for C supports arguments -mavx512bw: YES 00:02:21.888 Compiler for C supports arguments -mavx512dq: YES 00:02:21.888 Compiler for C supports arguments -mavx512vl: YES 00:02:21.888 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:21.888 Compiler for C supports arguments -mavx2: YES 00:02:21.888 Compiler for C supports arguments -mavx: YES 00:02:21.888 Message: lib/net: Defining dependency "net" 00:02:21.888 Message: lib/meter: Defining dependency "meter" 00:02:21.888 Message: lib/ethdev: Defining dependency "ethdev" 00:02:21.888 Message: lib/pci: Defining dependency "pci" 00:02:21.888 Message: lib/cmdline: Defining dependency "cmdline" 00:02:21.888 Message: lib/hash: Defining dependency "hash" 00:02:21.888 Message: lib/timer: Defining dependency "timer" 00:02:21.888 Message: lib/compressdev: Defining dependency "compressdev" 00:02:21.888 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:21.888 Message: lib/dmadev: Defining dependency "dmadev" 00:02:21.888 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:21.888 Message: lib/power: Defining dependency "power" 00:02:21.888 Message: lib/reorder: Defining dependency "reorder" 00:02:21.888 Message: lib/security: Defining dependency "security" 00:02:21.888 Has header "linux/userfaultfd.h" : YES 00:02:21.888 Has header "linux/vduse.h" : YES 00:02:21.888 Message: lib/vhost: Defining dependency "vhost" 00:02:21.888 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:21.888 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:21.888 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:21.888 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:21.888 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:21.888 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:21.888 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:21.888 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:21.888 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:21.888 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:21.888 Program doxygen found: YES (/usr/bin/doxygen) 00:02:21.888 Configuring doxy-api-html.conf using configuration 00:02:21.888 Configuring doxy-api-man.conf using configuration 00:02:21.888 Program mandb found: YES (/usr/bin/mandb) 00:02:21.888 Program sphinx-build found: NO 00:02:21.888 Configuring rte_build_config.h using configuration 00:02:21.888 Message: 00:02:21.888 ================= 00:02:21.888 Applications Enabled 00:02:21.888 ================= 00:02:21.888 00:02:21.888 apps: 00:02:21.888 00:02:21.888 00:02:21.888 Message: 00:02:21.888 ================= 00:02:21.888 Libraries Enabled 00:02:21.888 ================= 00:02:21.888 00:02:21.888 libs: 00:02:21.888 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:21.888 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:21.888 cryptodev, dmadev, power, reorder, security, vhost, 00:02:21.888 00:02:21.888 Message: 00:02:21.888 =============== 00:02:21.888 Drivers Enabled 00:02:21.888 
=============== 00:02:21.888 00:02:21.888 common: 00:02:21.888 00:02:21.888 bus: 00:02:21.888 pci, vdev, 00:02:21.888 mempool: 00:02:21.888 ring, 00:02:21.888 dma: 00:02:21.888 00:02:21.888 net: 00:02:21.888 00:02:21.888 crypto: 00:02:21.888 00:02:21.888 compress: 00:02:21.888 00:02:21.888 vdpa: 00:02:21.888 00:02:21.888 00:02:21.888 Message: 00:02:21.888 ================= 00:02:21.888 Content Skipped 00:02:21.888 ================= 00:02:21.888 00:02:21.888 apps: 00:02:21.888 dumpcap: explicitly disabled via build config 00:02:21.888 graph: explicitly disabled via build config 00:02:21.888 pdump: explicitly disabled via build config 00:02:21.888 proc-info: explicitly disabled via build config 00:02:21.888 test-acl: explicitly disabled via build config 00:02:21.888 test-bbdev: explicitly disabled via build config 00:02:21.888 test-cmdline: explicitly disabled via build config 00:02:21.888 test-compress-perf: explicitly disabled via build config 00:02:21.888 test-crypto-perf: explicitly disabled via build config 00:02:21.888 test-dma-perf: explicitly disabled via build config 00:02:21.888 test-eventdev: explicitly disabled via build config 00:02:21.888 test-fib: explicitly disabled via build config 00:02:21.888 test-flow-perf: explicitly disabled via build config 00:02:21.888 test-gpudev: explicitly disabled via build config 00:02:21.888 test-mldev: explicitly disabled via build config 00:02:21.888 test-pipeline: explicitly disabled via build config 00:02:21.888 test-pmd: explicitly disabled via build config 00:02:21.888 test-regex: explicitly disabled via build config 00:02:21.888 test-sad: explicitly disabled via build config 00:02:21.888 test-security-perf: explicitly disabled via build config 00:02:21.888 00:02:21.888 libs: 00:02:21.888 argparse: explicitly disabled via build config 00:02:21.888 metrics: explicitly disabled via build config 00:02:21.888 acl: explicitly disabled via build config 00:02:21.888 bbdev: explicitly disabled via build config 00:02:21.888 bitratestats: explicitly disabled via build config 00:02:21.888 bpf: explicitly disabled via build config 00:02:21.888 cfgfile: explicitly disabled via build config 00:02:21.888 distributor: explicitly disabled via build config 00:02:21.888 efd: explicitly disabled via build config 00:02:21.888 eventdev: explicitly disabled via build config 00:02:21.888 dispatcher: explicitly disabled via build config 00:02:21.888 gpudev: explicitly disabled via build config 00:02:21.888 gro: explicitly disabled via build config 00:02:21.888 gso: explicitly disabled via build config 00:02:21.888 ip_frag: explicitly disabled via build config 00:02:21.888 jobstats: explicitly disabled via build config 00:02:21.888 latencystats: explicitly disabled via build config 00:02:21.888 lpm: explicitly disabled via build config 00:02:21.888 member: explicitly disabled via build config 00:02:21.888 pcapng: explicitly disabled via build config 00:02:21.888 rawdev: explicitly disabled via build config 00:02:21.888 regexdev: explicitly disabled via build config 00:02:21.888 mldev: explicitly disabled via build config 00:02:21.888 rib: explicitly disabled via build config 00:02:21.888 sched: explicitly disabled via build config 00:02:21.888 stack: explicitly disabled via build config 00:02:21.888 ipsec: explicitly disabled via build config 00:02:21.888 pdcp: explicitly disabled via build config 00:02:21.888 fib: explicitly disabled via build config 00:02:21.888 port: explicitly disabled via build config 00:02:21.888 pdump: explicitly disabled via build config 
00:02:21.888 table: explicitly disabled via build config 00:02:21.888 pipeline: explicitly disabled via build config 00:02:21.888 graph: explicitly disabled via build config 00:02:21.888 node: explicitly disabled via build config 00:02:21.888 00:02:21.888 drivers: 00:02:21.888 common/cpt: not in enabled drivers build config 00:02:21.888 common/dpaax: not in enabled drivers build config 00:02:21.888 common/iavf: not in enabled drivers build config 00:02:21.888 common/idpf: not in enabled drivers build config 00:02:21.888 common/ionic: not in enabled drivers build config 00:02:21.888 common/mvep: not in enabled drivers build config 00:02:21.888 common/octeontx: not in enabled drivers build config 00:02:21.888 bus/auxiliary: not in enabled drivers build config 00:02:21.888 bus/cdx: not in enabled drivers build config 00:02:21.888 bus/dpaa: not in enabled drivers build config 00:02:21.888 bus/fslmc: not in enabled drivers build config 00:02:21.888 bus/ifpga: not in enabled drivers build config 00:02:21.888 bus/platform: not in enabled drivers build config 00:02:21.888 bus/uacce: not in enabled drivers build config 00:02:21.888 bus/vmbus: not in enabled drivers build config 00:02:21.888 common/cnxk: not in enabled drivers build config 00:02:21.888 common/mlx5: not in enabled drivers build config 00:02:21.888 common/nfp: not in enabled drivers build config 00:02:21.888 common/nitrox: not in enabled drivers build config 00:02:21.888 common/qat: not in enabled drivers build config 00:02:21.888 common/sfc_efx: not in enabled drivers build config 00:02:21.888 mempool/bucket: not in enabled drivers build config 00:02:21.888 mempool/cnxk: not in enabled drivers build config 00:02:21.888 mempool/dpaa: not in enabled drivers build config 00:02:21.888 mempool/dpaa2: not in enabled drivers build config 00:02:21.888 mempool/octeontx: not in enabled drivers build config 00:02:21.888 mempool/stack: not in enabled drivers build config 00:02:21.888 dma/cnxk: not in enabled drivers build config 00:02:21.888 dma/dpaa: not in enabled drivers build config 00:02:21.888 dma/dpaa2: not in enabled drivers build config 00:02:21.888 dma/hisilicon: not in enabled drivers build config 00:02:21.888 dma/idxd: not in enabled drivers build config 00:02:21.888 dma/ioat: not in enabled drivers build config 00:02:21.889 dma/skeleton: not in enabled drivers build config 00:02:21.889 net/af_packet: not in enabled drivers build config 00:02:21.889 net/af_xdp: not in enabled drivers build config 00:02:21.889 net/ark: not in enabled drivers build config 00:02:21.889 net/atlantic: not in enabled drivers build config 00:02:21.889 net/avp: not in enabled drivers build config 00:02:21.889 net/axgbe: not in enabled drivers build config 00:02:21.889 net/bnx2x: not in enabled drivers build config 00:02:21.889 net/bnxt: not in enabled drivers build config 00:02:21.889 net/bonding: not in enabled drivers build config 00:02:21.889 net/cnxk: not in enabled drivers build config 00:02:21.889 net/cpfl: not in enabled drivers build config 00:02:21.889 net/cxgbe: not in enabled drivers build config 00:02:21.889 net/dpaa: not in enabled drivers build config 00:02:21.889 net/dpaa2: not in enabled drivers build config 00:02:21.889 net/e1000: not in enabled drivers build config 00:02:21.889 net/ena: not in enabled drivers build config 00:02:21.889 net/enetc: not in enabled drivers build config 00:02:21.889 net/enetfec: not in enabled drivers build config 00:02:21.889 net/enic: not in enabled drivers build config 00:02:21.889 net/failsafe: not in enabled 
drivers build config 00:02:21.889 net/fm10k: not in enabled drivers build config 00:02:21.889 net/gve: not in enabled drivers build config 00:02:21.889 net/hinic: not in enabled drivers build config 00:02:21.889 net/hns3: not in enabled drivers build config 00:02:21.889 net/i40e: not in enabled drivers build config 00:02:21.889 net/iavf: not in enabled drivers build config 00:02:21.889 net/ice: not in enabled drivers build config 00:02:21.889 net/idpf: not in enabled drivers build config 00:02:21.889 net/igc: not in enabled drivers build config 00:02:21.889 net/ionic: not in enabled drivers build config 00:02:21.889 net/ipn3ke: not in enabled drivers build config 00:02:21.889 net/ixgbe: not in enabled drivers build config 00:02:21.889 net/mana: not in enabled drivers build config 00:02:21.889 net/memif: not in enabled drivers build config 00:02:21.889 net/mlx4: not in enabled drivers build config 00:02:21.889 net/mlx5: not in enabled drivers build config 00:02:21.889 net/mvneta: not in enabled drivers build config 00:02:21.889 net/mvpp2: not in enabled drivers build config 00:02:21.889 net/netvsc: not in enabled drivers build config 00:02:21.889 net/nfb: not in enabled drivers build config 00:02:21.889 net/nfp: not in enabled drivers build config 00:02:21.889 net/ngbe: not in enabled drivers build config 00:02:21.889 net/null: not in enabled drivers build config 00:02:21.889 net/octeontx: not in enabled drivers build config 00:02:21.889 net/octeon_ep: not in enabled drivers build config 00:02:21.889 net/pcap: not in enabled drivers build config 00:02:21.889 net/pfe: not in enabled drivers build config 00:02:21.889 net/qede: not in enabled drivers build config 00:02:21.889 net/ring: not in enabled drivers build config 00:02:21.889 net/sfc: not in enabled drivers build config 00:02:21.889 net/softnic: not in enabled drivers build config 00:02:21.889 net/tap: not in enabled drivers build config 00:02:21.889 net/thunderx: not in enabled drivers build config 00:02:21.889 net/txgbe: not in enabled drivers build config 00:02:21.889 net/vdev_netvsc: not in enabled drivers build config 00:02:21.889 net/vhost: not in enabled drivers build config 00:02:21.889 net/virtio: not in enabled drivers build config 00:02:21.889 net/vmxnet3: not in enabled drivers build config 00:02:21.889 raw/*: missing internal dependency, "rawdev" 00:02:21.889 crypto/armv8: not in enabled drivers build config 00:02:21.889 crypto/bcmfs: not in enabled drivers build config 00:02:21.889 crypto/caam_jr: not in enabled drivers build config 00:02:21.889 crypto/ccp: not in enabled drivers build config 00:02:21.889 crypto/cnxk: not in enabled drivers build config 00:02:21.889 crypto/dpaa_sec: not in enabled drivers build config 00:02:21.889 crypto/dpaa2_sec: not in enabled drivers build config 00:02:21.889 crypto/ipsec_mb: not in enabled drivers build config 00:02:21.889 crypto/mlx5: not in enabled drivers build config 00:02:21.889 crypto/mvsam: not in enabled drivers build config 00:02:21.889 crypto/nitrox: not in enabled drivers build config 00:02:21.889 crypto/null: not in enabled drivers build config 00:02:21.889 crypto/octeontx: not in enabled drivers build config 00:02:21.889 crypto/openssl: not in enabled drivers build config 00:02:21.889 crypto/scheduler: not in enabled drivers build config 00:02:21.889 crypto/uadk: not in enabled drivers build config 00:02:21.889 crypto/virtio: not in enabled drivers build config 00:02:21.889 compress/isal: not in enabled drivers build config 00:02:21.889 compress/mlx5: not in enabled 
drivers build config 00:02:21.889 compress/nitrox: not in enabled drivers build config 00:02:21.889 compress/octeontx: not in enabled drivers build config 00:02:21.889 compress/zlib: not in enabled drivers build config 00:02:21.889 regex/*: missing internal dependency, "regexdev" 00:02:21.889 ml/*: missing internal dependency, "mldev" 00:02:21.889 vdpa/ifc: not in enabled drivers build config 00:02:21.889 vdpa/mlx5: not in enabled drivers build config 00:02:21.889 vdpa/nfp: not in enabled drivers build config 00:02:21.889 vdpa/sfc: not in enabled drivers build config 00:02:21.889 event/*: missing internal dependency, "eventdev" 00:02:21.889 baseband/*: missing internal dependency, "bbdev" 00:02:21.889 gpu/*: missing internal dependency, "gpudev" 00:02:21.889 00:02:21.889 00:02:21.889 Build targets in project: 85 00:02:21.889 00:02:21.889 DPDK 24.03.0 00:02:21.889 00:02:21.889 User defined options 00:02:21.889 buildtype : debug 00:02:21.889 default_library : shared 00:02:21.889 libdir : lib 00:02:21.889 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:21.889 b_sanitize : address 00:02:21.889 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:21.889 c_link_args : 00:02:21.889 cpu_instruction_set: native 00:02:21.889 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:21.889 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:21.889 enable_docs : false 00:02:21.889 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:21.889 enable_kmods : false 00:02:21.889 max_lcores : 128 00:02:21.889 tests : false 00:02:21.889 00:02:21.889 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:22.157 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:22.157 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:22.157 [2/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:22.157 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:22.157 [4/268] Linking static target lib/librte_kvargs.a 00:02:22.414 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:22.414 [6/268] Linking static target lib/librte_log.a 00:02:22.672 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:22.930 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.930 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:22.930 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:23.188 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:23.188 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:23.188 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:23.188 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:23.188 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:23.188 [16/268] Linking static target lib/librte_telemetry.a 
00:02:23.446 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:23.446 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:23.446 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.446 [20/268] Linking target lib/librte_log.so.24.1 00:02:23.704 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:23.704 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:23.962 [23/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.962 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:23.962 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:23.962 [26/268] Linking target lib/librte_telemetry.so.24.1 00:02:23.962 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:23.962 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:23.962 [29/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:23.962 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:23.962 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:24.220 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:24.220 [33/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:24.220 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:24.477 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:24.477 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:24.478 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:24.735 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:24.994 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:24.994 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:24.994 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:24.994 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:24.994 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:25.252 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:25.252 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:25.252 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:25.252 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:25.510 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:25.510 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:25.768 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:25.768 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:26.026 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:26.026 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:26.026 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:26.026 [55/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:26.026 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:26.284 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:26.284 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:26.284 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:26.543 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:26.543 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:26.802 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:26.802 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:26.802 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:26.802 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:26.802 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:27.061 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:27.320 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:27.320 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:27.579 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:27.579 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:27.579 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:27.579 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:27.579 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:27.579 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:27.838 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:27.838 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:27.838 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:27.838 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:28.096 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:28.355 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:28.613 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:28.613 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:28.613 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:28.871 [85/268] Linking static target lib/librte_eal.a 00:02:28.871 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:28.871 [87/268] Linking static target lib/librte_ring.a 00:02:28.871 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:28.871 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:29.133 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:29.391 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:29.391 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:29.391 [93/268] Linking static target lib/librte_rcu.a 00:02:29.391 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:29.391 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:29.391 [96/268] Linking static target lib/librte_mempool.a 00:02:29.391 [97/268] 
Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.650 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:29.908 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:29.908 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:29.908 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.167 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:30.167 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:30.167 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:30.426 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:30.684 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:30.684 [107/268] Linking static target lib/librte_net.a 00:02:30.684 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:30.684 [109/268] Linking static target lib/librte_mbuf.a 00:02:30.941 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.941 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:30.941 [112/268] Linking static target lib/librte_meter.a 00:02:31.199 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.199 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:31.199 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:31.199 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:31.457 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.457 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:32.023 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:32.023 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.023 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:32.281 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:32.539 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:32.539 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:32.798 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:32.798 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:32.798 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:32.798 [128/268] Linking static target lib/librte_pci.a 00:02:32.798 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:32.798 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:32.798 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:32.798 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:33.056 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:33.056 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:33.056 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:33.056 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.056 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:33.314 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:33.314 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:33.314 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:33.314 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:33.314 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:33.314 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:33.314 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:33.314 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:33.572 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:33.572 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:33.572 [148/268] Linking static target lib/librte_cmdline.a 00:02:33.830 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:34.089 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:34.089 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:34.089 [152/268] Linking static target lib/librte_timer.a 00:02:34.089 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:34.347 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:34.347 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:34.605 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:34.605 [157/268] Linking static target lib/librte_ethdev.a 00:02:34.864 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:34.864 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:34.864 [160/268] Linking static target lib/librte_compressdev.a 00:02:34.864 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.864 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:34.864 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:34.864 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:35.122 [165/268] Linking static target lib/librte_hash.a 00:02:35.380 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.380 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:35.380 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:35.380 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:35.380 [170/268] Linking static target lib/librte_dmadev.a 00:02:35.638 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:35.638 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:35.638 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:35.913 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.183 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:36.183 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:36.183 [177/268] 
Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.441 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:36.441 [179/268] Linking static target lib/librte_cryptodev.a 00:02:36.441 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:36.441 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:36.441 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.441 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:36.441 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:37.007 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:37.007 [186/268] Linking static target lib/librte_power.a 00:02:37.265 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:37.265 [188/268] Linking static target lib/librte_reorder.a 00:02:37.265 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:37.265 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:37.523 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:37.523 [192/268] Linking static target lib/librte_security.a 00:02:37.781 [193/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.781 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:37.781 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:38.039 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.296 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.296 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:38.554 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:38.554 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:38.812 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:38.812 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:38.812 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.070 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:39.070 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:39.327 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:39.327 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:39.327 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:39.327 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:39.585 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:39.585 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:39.585 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:39.585 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:39.585 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:39.585 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:39.585 [216/268] Generating 
drivers/rte_bus_pci.pmd.c with a custom command 00:02:39.585 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.585 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.844 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:39.844 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:39.844 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:39.844 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.102 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:40.102 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:40.102 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:40.102 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:40.102 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.037 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.037 [229/268] Linking target lib/librte_eal.so.24.1 00:02:41.037 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:41.295 [231/268] Linking target lib/librte_ring.so.24.1 00:02:41.295 [232/268] Linking target lib/librte_timer.so.24.1 00:02:41.295 [233/268] Linking target lib/librte_meter.so.24.1 00:02:41.295 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:41.295 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:41.295 [236/268] Linking target lib/librte_pci.so.24.1 00:02:41.295 [237/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.295 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:41.295 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:41.295 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:41.295 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:41.295 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:41.295 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:41.295 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:41.552 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:41.552 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:41.552 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:41.552 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:41.552 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:41.810 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:41.810 [251/268] Linking target lib/librte_net.so.24.1 00:02:41.810 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:41.810 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:41.810 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:41.810 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:41.810 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:42.068 [257/268] 
Linking target lib/librte_hash.so.24.1 00:02:42.068 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:42.068 [259/268] Linking target lib/librte_security.so.24.1 00:02:42.068 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:42.635 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.919 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:42.919 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:42.919 [264/268] Linking target lib/librte_power.so.24.1 00:02:46.213 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:46.213 [266/268] Linking static target lib/librte_vhost.a 00:02:47.587 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.844 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:47.844 INFO: autodetecting backend as ninja 00:02:47.844 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:49.220 CC lib/ut_mock/mock.o 00:02:49.220 CC lib/ut/ut.o 00:02:49.220 CC lib/log/log.o 00:02:49.220 CC lib/log/log_flags.o 00:02:49.220 CC lib/log/log_deprecated.o 00:02:49.220 LIB libspdk_ut.a 00:02:49.220 LIB libspdk_ut_mock.a 00:02:49.220 SO libspdk_ut.so.2.0 00:02:49.220 SO libspdk_ut_mock.so.6.0 00:02:49.220 LIB libspdk_log.a 00:02:49.220 SYMLINK libspdk_ut.so 00:02:49.220 SO libspdk_log.so.7.0 00:02:49.220 SYMLINK libspdk_ut_mock.so 00:02:49.478 SYMLINK libspdk_log.so 00:02:49.478 CXX lib/trace_parser/trace.o 00:02:49.736 CC lib/ioat/ioat.o 00:02:49.736 CC lib/util/base64.o 00:02:49.736 CC lib/util/bit_array.o 00:02:49.736 CC lib/util/cpuset.o 00:02:49.736 CC lib/dma/dma.o 00:02:49.736 CC lib/util/crc32.o 00:02:49.736 CC lib/util/crc32c.o 00:02:49.736 CC lib/util/crc16.o 00:02:49.736 CC lib/vfio_user/host/vfio_user_pci.o 00:02:49.736 CC lib/util/crc32_ieee.o 00:02:49.736 CC lib/vfio_user/host/vfio_user.o 00:02:49.736 CC lib/util/crc64.o 00:02:49.736 CC lib/util/dif.o 00:02:49.994 LIB libspdk_dma.a 00:02:49.994 CC lib/util/fd.o 00:02:49.994 SO libspdk_dma.so.4.0 00:02:49.994 CC lib/util/fd_group.o 00:02:49.994 CC lib/util/file.o 00:02:49.994 CC lib/util/hexlify.o 00:02:49.994 SYMLINK libspdk_dma.so 00:02:49.994 CC lib/util/iov.o 00:02:49.994 LIB libspdk_ioat.a 00:02:49.994 SO libspdk_ioat.so.7.0 00:02:49.994 CC lib/util/math.o 00:02:49.994 LIB libspdk_vfio_user.a 00:02:49.994 CC lib/util/net.o 00:02:49.994 SYMLINK libspdk_ioat.so 00:02:49.994 CC lib/util/pipe.o 00:02:50.252 SO libspdk_vfio_user.so.5.0 00:02:50.252 CC lib/util/strerror_tls.o 00:02:50.252 CC lib/util/string.o 00:02:50.252 SYMLINK libspdk_vfio_user.so 00:02:50.252 CC lib/util/uuid.o 00:02:50.252 CC lib/util/xor.o 00:02:50.252 CC lib/util/zipf.o 00:02:50.511 LIB libspdk_util.a 00:02:50.770 SO libspdk_util.so.10.0 00:02:50.770 LIB libspdk_trace_parser.a 00:02:50.770 SO libspdk_trace_parser.so.5.0 00:02:50.770 SYMLINK libspdk_util.so 00:02:51.029 SYMLINK libspdk_trace_parser.so 00:02:51.029 CC lib/idxd/idxd.o 00:02:51.029 CC lib/rdma_provider/common.o 00:02:51.029 CC lib/idxd/idxd_user.o 00:02:51.029 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:51.029 CC lib/json/json_parse.o 00:02:51.029 CC lib/env_dpdk/env.o 00:02:51.029 CC lib/idxd/idxd_kernel.o 00:02:51.029 CC lib/rdma_utils/rdma_utils.o 00:02:51.029 CC lib/vmd/vmd.o 00:02:51.029 CC lib/conf/conf.o 00:02:51.296 CC lib/vmd/led.o 00:02:51.296 CC lib/json/json_util.o 
00:02:51.296 LIB libspdk_rdma_provider.a 00:02:51.296 LIB libspdk_conf.a 00:02:51.296 SO libspdk_rdma_provider.so.6.0 00:02:51.296 SO libspdk_conf.so.6.0 00:02:51.296 CC lib/json/json_write.o 00:02:51.296 CC lib/env_dpdk/memory.o 00:02:51.296 LIB libspdk_rdma_utils.a 00:02:51.296 SYMLINK libspdk_rdma_provider.so 00:02:51.296 CC lib/env_dpdk/pci.o 00:02:51.296 SYMLINK libspdk_conf.so 00:02:51.296 CC lib/env_dpdk/init.o 00:02:51.296 CC lib/env_dpdk/threads.o 00:02:51.296 SO libspdk_rdma_utils.so.1.0 00:02:51.578 SYMLINK libspdk_rdma_utils.so 00:02:51.578 CC lib/env_dpdk/pci_ioat.o 00:02:51.578 CC lib/env_dpdk/pci_virtio.o 00:02:51.578 CC lib/env_dpdk/pci_vmd.o 00:02:51.578 CC lib/env_dpdk/pci_idxd.o 00:02:51.578 LIB libspdk_json.a 00:02:51.578 CC lib/env_dpdk/pci_event.o 00:02:51.837 CC lib/env_dpdk/sigbus_handler.o 00:02:51.837 SO libspdk_json.so.6.0 00:02:51.837 CC lib/env_dpdk/pci_dpdk.o 00:02:51.837 SYMLINK libspdk_json.so 00:02:51.837 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:51.837 LIB libspdk_idxd.a 00:02:51.837 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:51.837 SO libspdk_idxd.so.12.0 00:02:51.837 SYMLINK libspdk_idxd.so 00:02:51.837 LIB libspdk_vmd.a 00:02:52.096 SO libspdk_vmd.so.6.0 00:02:52.096 CC lib/jsonrpc/jsonrpc_server.o 00:02:52.096 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:52.096 CC lib/jsonrpc/jsonrpc_client.o 00:02:52.096 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:52.096 SYMLINK libspdk_vmd.so 00:02:52.355 LIB libspdk_jsonrpc.a 00:02:52.355 SO libspdk_jsonrpc.so.6.0 00:02:52.355 SYMLINK libspdk_jsonrpc.so 00:02:52.614 CC lib/rpc/rpc.o 00:02:52.872 LIB libspdk_rpc.a 00:02:52.872 LIB libspdk_env_dpdk.a 00:02:53.130 SO libspdk_rpc.so.6.0 00:02:53.130 SYMLINK libspdk_rpc.so 00:02:53.130 SO libspdk_env_dpdk.so.15.0 00:02:53.388 CC lib/keyring/keyring.o 00:02:53.388 CC lib/keyring/keyring_rpc.o 00:02:53.388 CC lib/trace/trace.o 00:02:53.388 CC lib/trace/trace_rpc.o 00:02:53.388 CC lib/trace/trace_flags.o 00:02:53.388 SYMLINK libspdk_env_dpdk.so 00:02:53.388 CC lib/notify/notify.o 00:02:53.388 CC lib/notify/notify_rpc.o 00:02:53.388 LIB libspdk_notify.a 00:02:53.647 SO libspdk_notify.so.6.0 00:02:53.647 LIB libspdk_keyring.a 00:02:53.647 SO libspdk_keyring.so.1.0 00:02:53.647 SYMLINK libspdk_notify.so 00:02:53.647 LIB libspdk_trace.a 00:02:53.647 SYMLINK libspdk_keyring.so 00:02:53.647 SO libspdk_trace.so.10.0 00:02:53.647 SYMLINK libspdk_trace.so 00:02:53.906 CC lib/sock/sock.o 00:02:53.906 CC lib/sock/sock_rpc.o 00:02:53.906 CC lib/thread/thread.o 00:02:53.906 CC lib/thread/iobuf.o 00:02:54.474 LIB libspdk_sock.a 00:02:54.474 SO libspdk_sock.so.10.0 00:02:54.733 SYMLINK libspdk_sock.so 00:02:54.991 CC lib/nvme/nvme_ctrlr.o 00:02:54.991 CC lib/nvme/nvme_fabric.o 00:02:54.991 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:54.991 CC lib/nvme/nvme_ns_cmd.o 00:02:54.991 CC lib/nvme/nvme_ns.o 00:02:54.991 CC lib/nvme/nvme_pcie.o 00:02:54.991 CC lib/nvme/nvme_qpair.o 00:02:54.991 CC lib/nvme/nvme_pcie_common.o 00:02:54.991 CC lib/nvme/nvme.o 00:02:55.925 CC lib/nvme/nvme_quirks.o 00:02:55.925 CC lib/nvme/nvme_transport.o 00:02:55.925 CC lib/nvme/nvme_discovery.o 00:02:55.925 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:55.925 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:55.925 CC lib/nvme/nvme_tcp.o 00:02:56.183 CC lib/nvme/nvme_opal.o 00:02:56.183 LIB libspdk_thread.a 00:02:56.183 SO libspdk_thread.so.10.1 00:02:56.183 SYMLINK libspdk_thread.so 00:02:56.183 CC lib/nvme/nvme_io_msg.o 00:02:56.183 CC lib/nvme/nvme_poll_group.o 00:02:56.441 CC lib/nvme/nvme_zns.o 00:02:56.441 CC lib/nvme/nvme_stubs.o 00:02:56.699 CC 
lib/nvme/nvme_auth.o 00:02:56.699 CC lib/nvme/nvme_cuse.o 00:02:56.699 CC lib/accel/accel.o 00:02:56.957 CC lib/accel/accel_rpc.o 00:02:56.957 CC lib/accel/accel_sw.o 00:02:56.957 CC lib/nvme/nvme_rdma.o 00:02:57.215 CC lib/blob/blobstore.o 00:02:57.215 CC lib/init/json_config.o 00:02:57.215 CC lib/virtio/virtio.o 00:02:57.215 CC lib/virtio/virtio_vhost_user.o 00:02:57.473 CC lib/init/subsystem.o 00:02:57.731 CC lib/blob/request.o 00:02:57.731 CC lib/virtio/virtio_vfio_user.o 00:02:57.731 CC lib/virtio/virtio_pci.o 00:02:57.731 CC lib/blob/zeroes.o 00:02:57.731 CC lib/blob/blob_bs_dev.o 00:02:57.731 CC lib/init/subsystem_rpc.o 00:02:57.989 CC lib/init/rpc.o 00:02:57.989 LIB libspdk_accel.a 00:02:57.989 SO libspdk_accel.so.16.0 00:02:57.989 LIB libspdk_virtio.a 00:02:57.989 LIB libspdk_init.a 00:02:58.247 SO libspdk_init.so.5.0 00:02:58.247 SO libspdk_virtio.so.7.0 00:02:58.247 SYMLINK libspdk_accel.so 00:02:58.247 SYMLINK libspdk_init.so 00:02:58.247 SYMLINK libspdk_virtio.so 00:02:58.533 CC lib/bdev/bdev.o 00:02:58.533 CC lib/bdev/bdev_zone.o 00:02:58.533 CC lib/bdev/bdev_rpc.o 00:02:58.533 CC lib/bdev/part.o 00:02:58.533 CC lib/bdev/scsi_nvme.o 00:02:58.533 CC lib/event/app.o 00:02:58.533 CC lib/event/log_rpc.o 00:02:58.533 CC lib/event/reactor.o 00:02:58.533 CC lib/event/app_rpc.o 00:02:58.533 CC lib/event/scheduler_static.o 00:02:58.791 LIB libspdk_nvme.a 00:02:59.048 SO libspdk_nvme.so.13.1 00:02:59.306 LIB libspdk_event.a 00:02:59.306 SO libspdk_event.so.14.0 00:02:59.306 SYMLINK libspdk_event.so 00:02:59.306 SYMLINK libspdk_nvme.so 00:03:01.835 LIB libspdk_blob.a 00:03:01.835 SO libspdk_blob.so.11.0 00:03:01.835 SYMLINK libspdk_blob.so 00:03:02.093 CC lib/blobfs/blobfs.o 00:03:02.093 CC lib/lvol/lvol.o 00:03:02.093 CC lib/blobfs/tree.o 00:03:02.093 LIB libspdk_bdev.a 00:03:02.093 SO libspdk_bdev.so.16.0 00:03:02.351 SYMLINK libspdk_bdev.so 00:03:02.351 CC lib/ublk/ublk.o 00:03:02.351 CC lib/ublk/ublk_rpc.o 00:03:02.351 CC lib/nvmf/ctrlr.o 00:03:02.351 CC lib/nvmf/ctrlr_discovery.o 00:03:02.351 CC lib/scsi/dev.o 00:03:02.351 CC lib/scsi/lun.o 00:03:02.351 CC lib/ftl/ftl_core.o 00:03:02.351 CC lib/nbd/nbd.o 00:03:02.609 CC lib/ftl/ftl_init.o 00:03:02.609 CC lib/ftl/ftl_layout.o 00:03:02.867 CC lib/scsi/port.o 00:03:02.867 CC lib/scsi/scsi.o 00:03:02.867 CC lib/nbd/nbd_rpc.o 00:03:02.867 CC lib/scsi/scsi_bdev.o 00:03:03.125 CC lib/scsi/scsi_pr.o 00:03:03.125 CC lib/nvmf/ctrlr_bdev.o 00:03:03.125 CC lib/nvmf/subsystem.o 00:03:03.125 CC lib/ftl/ftl_debug.o 00:03:03.125 LIB libspdk_blobfs.a 00:03:03.125 LIB libspdk_nbd.a 00:03:03.125 SO libspdk_blobfs.so.10.0 00:03:03.125 SO libspdk_nbd.so.7.0 00:03:03.125 LIB libspdk_lvol.a 00:03:03.125 SYMLINK libspdk_blobfs.so 00:03:03.384 CC lib/scsi/scsi_rpc.o 00:03:03.384 SO libspdk_lvol.so.10.0 00:03:03.384 SYMLINK libspdk_nbd.so 00:03:03.384 CC lib/scsi/task.o 00:03:03.384 LIB libspdk_ublk.a 00:03:03.384 SO libspdk_ublk.so.3.0 00:03:03.384 SYMLINK libspdk_lvol.so 00:03:03.384 CC lib/nvmf/nvmf.o 00:03:03.384 CC lib/ftl/ftl_io.o 00:03:03.384 SYMLINK libspdk_ublk.so 00:03:03.384 CC lib/nvmf/nvmf_rpc.o 00:03:03.384 CC lib/nvmf/transport.o 00:03:03.384 CC lib/nvmf/tcp.o 00:03:03.642 CC lib/nvmf/stubs.o 00:03:03.642 LIB libspdk_scsi.a 00:03:03.642 CC lib/ftl/ftl_sb.o 00:03:03.642 SO libspdk_scsi.so.9.0 00:03:03.900 SYMLINK libspdk_scsi.so 00:03:03.900 CC lib/ftl/ftl_l2p.o 00:03:03.900 CC lib/ftl/ftl_l2p_flat.o 00:03:03.900 CC lib/nvmf/mdns_server.o 00:03:04.157 CC lib/nvmf/rdma.o 00:03:04.157 CC lib/nvmf/auth.o 00:03:04.157 CC lib/ftl/ftl_nv_cache.o 
00:03:04.157 CC lib/ftl/ftl_band.o 00:03:04.722 CC lib/ftl/ftl_band_ops.o 00:03:04.722 CC lib/iscsi/conn.o 00:03:04.722 CC lib/vhost/vhost.o 00:03:04.722 CC lib/vhost/vhost_rpc.o 00:03:04.722 CC lib/ftl/ftl_writer.o 00:03:04.722 CC lib/iscsi/init_grp.o 00:03:04.980 CC lib/iscsi/iscsi.o 00:03:04.980 CC lib/iscsi/md5.o 00:03:04.980 CC lib/iscsi/param.o 00:03:05.239 CC lib/iscsi/portal_grp.o 00:03:05.239 CC lib/iscsi/tgt_node.o 00:03:05.497 CC lib/vhost/vhost_scsi.o 00:03:05.497 CC lib/ftl/ftl_rq.o 00:03:05.497 CC lib/vhost/vhost_blk.o 00:03:05.497 CC lib/vhost/rte_vhost_user.o 00:03:05.497 CC lib/iscsi/iscsi_subsystem.o 00:03:05.497 CC lib/iscsi/iscsi_rpc.o 00:03:05.497 CC lib/iscsi/task.o 00:03:05.756 CC lib/ftl/ftl_reloc.o 00:03:05.756 CC lib/ftl/ftl_l2p_cache.o 00:03:05.756 CC lib/ftl/ftl_p2l.o 00:03:06.014 CC lib/ftl/mngt/ftl_mngt.o 00:03:06.014 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:06.014 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:06.272 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:06.272 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:06.272 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:06.272 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:06.530 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:06.530 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:06.530 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:06.530 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:06.530 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:06.530 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:06.530 CC lib/ftl/utils/ftl_conf.o 00:03:06.788 CC lib/ftl/utils/ftl_md.o 00:03:06.788 LIB libspdk_vhost.a 00:03:06.788 LIB libspdk_iscsi.a 00:03:06.788 CC lib/ftl/utils/ftl_mempool.o 00:03:06.788 CC lib/ftl/utils/ftl_bitmap.o 00:03:06.788 CC lib/ftl/utils/ftl_property.o 00:03:06.788 SO libspdk_vhost.so.8.0 00:03:06.788 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:06.788 SO libspdk_iscsi.so.8.0 00:03:06.788 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:07.047 LIB libspdk_nvmf.a 00:03:07.047 SYMLINK libspdk_vhost.so 00:03:07.047 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:07.047 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:07.047 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:07.047 SYMLINK libspdk_iscsi.so 00:03:07.047 SO libspdk_nvmf.so.19.0 00:03:07.047 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:07.047 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:07.306 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:07.306 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:07.306 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:07.306 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:07.306 CC lib/ftl/base/ftl_base_dev.o 00:03:07.306 CC lib/ftl/base/ftl_base_bdev.o 00:03:07.306 CC lib/ftl/ftl_trace.o 00:03:07.306 SYMLINK libspdk_nvmf.so 00:03:07.564 LIB libspdk_ftl.a 00:03:07.823 SO libspdk_ftl.so.9.0 00:03:08.081 SYMLINK libspdk_ftl.so 00:03:08.648 CC module/env_dpdk/env_dpdk_rpc.o 00:03:08.648 CC module/accel/dsa/accel_dsa.o 00:03:08.648 CC module/keyring/linux/keyring.o 00:03:08.648 CC module/keyring/file/keyring.o 00:03:08.648 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:08.648 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:08.648 CC module/blob/bdev/blob_bdev.o 00:03:08.648 CC module/accel/error/accel_error.o 00:03:08.648 CC module/accel/ioat/accel_ioat.o 00:03:08.648 CC module/sock/posix/posix.o 00:03:08.648 LIB libspdk_env_dpdk_rpc.a 00:03:08.648 SO libspdk_env_dpdk_rpc.so.6.0 00:03:08.648 CC module/keyring/linux/keyring_rpc.o 00:03:08.648 CC module/keyring/file/keyring_rpc.o 00:03:08.648 SYMLINK libspdk_env_dpdk_rpc.so 00:03:08.907 LIB libspdk_scheduler_dpdk_governor.a 00:03:08.907 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:08.907 CC 
module/accel/error/accel_error_rpc.o 00:03:08.907 LIB libspdk_scheduler_dynamic.a 00:03:08.907 CC module/accel/ioat/accel_ioat_rpc.o 00:03:08.907 SO libspdk_scheduler_dynamic.so.4.0 00:03:08.907 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:08.907 LIB libspdk_keyring_linux.a 00:03:08.907 CC module/accel/dsa/accel_dsa_rpc.o 00:03:08.907 LIB libspdk_keyring_file.a 00:03:08.907 SYMLINK libspdk_scheduler_dynamic.so 00:03:08.907 LIB libspdk_blob_bdev.a 00:03:08.907 SO libspdk_keyring_linux.so.1.0 00:03:08.907 CC module/accel/iaa/accel_iaa.o 00:03:08.907 CC module/accel/iaa/accel_iaa_rpc.o 00:03:08.907 SO libspdk_keyring_file.so.1.0 00:03:08.907 SO libspdk_blob_bdev.so.11.0 00:03:08.907 LIB libspdk_accel_error.a 00:03:08.907 LIB libspdk_accel_ioat.a 00:03:08.907 SO libspdk_accel_error.so.2.0 00:03:08.907 SYMLINK libspdk_keyring_linux.so 00:03:09.165 SO libspdk_accel_ioat.so.6.0 00:03:09.165 SYMLINK libspdk_keyring_file.so 00:03:09.165 SYMLINK libspdk_blob_bdev.so 00:03:09.165 LIB libspdk_accel_dsa.a 00:03:09.165 SYMLINK libspdk_accel_error.so 00:03:09.165 SO libspdk_accel_dsa.so.5.0 00:03:09.165 SYMLINK libspdk_accel_ioat.so 00:03:09.165 CC module/scheduler/gscheduler/gscheduler.o 00:03:09.165 SYMLINK libspdk_accel_dsa.so 00:03:09.165 LIB libspdk_accel_iaa.a 00:03:09.165 SO libspdk_accel_iaa.so.3.0 00:03:09.165 LIB libspdk_scheduler_gscheduler.a 00:03:09.424 SO libspdk_scheduler_gscheduler.so.4.0 00:03:09.424 CC module/blobfs/bdev/blobfs_bdev.o 00:03:09.424 CC module/bdev/delay/vbdev_delay.o 00:03:09.424 SYMLINK libspdk_accel_iaa.so 00:03:09.424 CC module/bdev/lvol/vbdev_lvol.o 00:03:09.424 CC module/bdev/gpt/gpt.o 00:03:09.424 CC module/bdev/error/vbdev_error.o 00:03:09.424 CC module/bdev/null/bdev_null.o 00:03:09.424 CC module/bdev/malloc/bdev_malloc.o 00:03:09.424 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:09.424 SYMLINK libspdk_scheduler_gscheduler.so 00:03:09.424 CC module/bdev/gpt/vbdev_gpt.o 00:03:09.424 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:09.424 CC module/bdev/error/vbdev_error_rpc.o 00:03:09.683 LIB libspdk_sock_posix.a 00:03:09.683 SO libspdk_sock_posix.so.6.0 00:03:09.683 CC module/bdev/null/bdev_null_rpc.o 00:03:09.683 SYMLINK libspdk_sock_posix.so 00:03:09.683 LIB libspdk_bdev_gpt.a 00:03:09.683 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:09.683 LIB libspdk_blobfs_bdev.a 00:03:09.683 LIB libspdk_bdev_error.a 00:03:09.683 SO libspdk_bdev_gpt.so.6.0 00:03:09.683 SO libspdk_blobfs_bdev.so.6.0 00:03:09.683 SO libspdk_bdev_error.so.6.0 00:03:09.683 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:09.941 SYMLINK libspdk_bdev_gpt.so 00:03:09.941 SYMLINK libspdk_bdev_error.so 00:03:09.941 SYMLINK libspdk_blobfs_bdev.so 00:03:09.941 LIB libspdk_bdev_null.a 00:03:09.941 SO libspdk_bdev_null.so.6.0 00:03:09.941 CC module/bdev/nvme/bdev_nvme.o 00:03:09.941 LIB libspdk_bdev_lvol.a 00:03:09.941 SYMLINK libspdk_bdev_null.so 00:03:09.941 LIB libspdk_bdev_delay.a 00:03:09.941 CC module/bdev/raid/bdev_raid.o 00:03:09.941 CC module/bdev/passthru/vbdev_passthru.o 00:03:09.941 LIB libspdk_bdev_malloc.a 00:03:09.941 CC module/bdev/split/vbdev_split.o 00:03:09.941 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:09.941 SO libspdk_bdev_delay.so.6.0 00:03:09.941 SO libspdk_bdev_lvol.so.6.0 00:03:09.941 CC module/bdev/xnvme/bdev_xnvme.o 00:03:09.941 SO libspdk_bdev_malloc.so.6.0 00:03:10.199 SYMLINK libspdk_bdev_delay.so 00:03:10.199 SYMLINK libspdk_bdev_lvol.so 00:03:10.199 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:10.199 SYMLINK libspdk_bdev_malloc.so 00:03:10.199 CC 
module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:10.199 CC module/bdev/aio/bdev_aio.o 00:03:10.199 CC module/bdev/split/vbdev_split_rpc.o 00:03:10.199 CC module/bdev/ftl/bdev_ftl.o 00:03:10.199 CC module/bdev/aio/bdev_aio_rpc.o 00:03:10.199 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:10.458 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:10.458 LIB libspdk_bdev_xnvme.a 00:03:10.458 LIB libspdk_bdev_zone_block.a 00:03:10.458 SO libspdk_bdev_xnvme.so.3.0 00:03:10.458 SO libspdk_bdev_zone_block.so.6.0 00:03:10.458 LIB libspdk_bdev_split.a 00:03:10.458 CC module/bdev/nvme/nvme_rpc.o 00:03:10.458 SYMLINK libspdk_bdev_xnvme.so 00:03:10.458 SO libspdk_bdev_split.so.6.0 00:03:10.458 SYMLINK libspdk_bdev_zone_block.so 00:03:10.458 LIB libspdk_bdev_passthru.a 00:03:10.716 LIB libspdk_bdev_aio.a 00:03:10.716 SO libspdk_bdev_passthru.so.6.0 00:03:10.716 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:10.716 SO libspdk_bdev_aio.so.6.0 00:03:10.716 SYMLINK libspdk_bdev_split.so 00:03:10.716 SYMLINK libspdk_bdev_passthru.so 00:03:10.716 CC module/bdev/raid/bdev_raid_rpc.o 00:03:10.716 CC module/bdev/nvme/bdev_mdns_client.o 00:03:10.716 SYMLINK libspdk_bdev_aio.so 00:03:10.716 CC module/bdev/raid/bdev_raid_sb.o 00:03:10.716 CC module/bdev/iscsi/bdev_iscsi.o 00:03:10.716 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:10.716 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:10.974 CC module/bdev/raid/raid0.o 00:03:10.974 LIB libspdk_bdev_ftl.a 00:03:10.974 SO libspdk_bdev_ftl.so.6.0 00:03:10.974 CC module/bdev/nvme/vbdev_opal.o 00:03:10.974 SYMLINK libspdk_bdev_ftl.so 00:03:10.974 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:10.974 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:11.233 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:11.233 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:11.233 CC module/bdev/raid/raid1.o 00:03:11.233 CC module/bdev/raid/concat.o 00:03:11.233 LIB libspdk_bdev_iscsi.a 00:03:11.233 SO libspdk_bdev_iscsi.so.6.0 00:03:11.491 SYMLINK libspdk_bdev_iscsi.so 00:03:11.491 LIB libspdk_bdev_virtio.a 00:03:11.491 LIB libspdk_bdev_raid.a 00:03:11.491 SO libspdk_bdev_virtio.so.6.0 00:03:11.491 SO libspdk_bdev_raid.so.6.0 00:03:11.749 SYMLINK libspdk_bdev_virtio.so 00:03:11.749 SYMLINK libspdk_bdev_raid.so 00:03:13.124 LIB libspdk_bdev_nvme.a 00:03:13.124 SO libspdk_bdev_nvme.so.7.0 00:03:13.124 SYMLINK libspdk_bdev_nvme.so 00:03:13.691 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:13.691 CC module/event/subsystems/vmd/vmd.o 00:03:13.691 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:13.691 CC module/event/subsystems/iobuf/iobuf.o 00:03:13.691 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:13.691 CC module/event/subsystems/scheduler/scheduler.o 00:03:13.691 CC module/event/subsystems/sock/sock.o 00:03:13.691 CC module/event/subsystems/keyring/keyring.o 00:03:13.691 LIB libspdk_event_vmd.a 00:03:13.691 LIB libspdk_event_keyring.a 00:03:13.691 LIB libspdk_event_vhost_blk.a 00:03:13.691 LIB libspdk_event_scheduler.a 00:03:13.691 LIB libspdk_event_sock.a 00:03:13.691 SO libspdk_event_vmd.so.6.0 00:03:13.972 SO libspdk_event_keyring.so.1.0 00:03:13.972 LIB libspdk_event_iobuf.a 00:03:13.972 SO libspdk_event_vhost_blk.so.3.0 00:03:13.972 SO libspdk_event_scheduler.so.4.0 00:03:13.973 SO libspdk_event_sock.so.5.0 00:03:13.973 SO libspdk_event_iobuf.so.3.0 00:03:13.973 SYMLINK libspdk_event_vmd.so 00:03:13.973 SYMLINK libspdk_event_keyring.so 00:03:13.973 SYMLINK libspdk_event_vhost_blk.so 00:03:13.973 SYMLINK libspdk_event_scheduler.so 00:03:13.973 SYMLINK libspdk_event_sock.so 00:03:13.973 SYMLINK 
libspdk_event_iobuf.so 00:03:14.236 CC module/event/subsystems/accel/accel.o 00:03:14.494 LIB libspdk_event_accel.a 00:03:14.494 SO libspdk_event_accel.so.6.0 00:03:14.494 SYMLINK libspdk_event_accel.so 00:03:14.752 CC module/event/subsystems/bdev/bdev.o 00:03:15.011 LIB libspdk_event_bdev.a 00:03:15.011 SO libspdk_event_bdev.so.6.0 00:03:15.011 SYMLINK libspdk_event_bdev.so 00:03:15.269 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:15.269 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:15.269 CC module/event/subsystems/scsi/scsi.o 00:03:15.269 CC module/event/subsystems/nbd/nbd.o 00:03:15.269 CC module/event/subsystems/ublk/ublk.o 00:03:15.527 LIB libspdk_event_nbd.a 00:03:15.528 LIB libspdk_event_ublk.a 00:03:15.528 LIB libspdk_event_scsi.a 00:03:15.528 SO libspdk_event_nbd.so.6.0 00:03:15.528 SO libspdk_event_scsi.so.6.0 00:03:15.528 SO libspdk_event_ublk.so.3.0 00:03:15.528 SYMLINK libspdk_event_ublk.so 00:03:15.528 SYMLINK libspdk_event_scsi.so 00:03:15.528 SYMLINK libspdk_event_nbd.so 00:03:15.528 LIB libspdk_event_nvmf.a 00:03:15.786 SO libspdk_event_nvmf.so.6.0 00:03:15.786 SYMLINK libspdk_event_nvmf.so 00:03:15.786 CC module/event/subsystems/iscsi/iscsi.o 00:03:15.786 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:16.044 LIB libspdk_event_vhost_scsi.a 00:03:16.044 LIB libspdk_event_iscsi.a 00:03:16.044 SO libspdk_event_vhost_scsi.so.3.0 00:03:16.044 SO libspdk_event_iscsi.so.6.0 00:03:16.044 SYMLINK libspdk_event_vhost_scsi.so 00:03:16.044 SYMLINK libspdk_event_iscsi.so 00:03:16.302 SO libspdk.so.6.0 00:03:16.302 SYMLINK libspdk.so 00:03:16.560 TEST_HEADER include/spdk/accel.h 00:03:16.561 TEST_HEADER include/spdk/accel_module.h 00:03:16.561 TEST_HEADER include/spdk/assert.h 00:03:16.561 TEST_HEADER include/spdk/barrier.h 00:03:16.561 TEST_HEADER include/spdk/base64.h 00:03:16.561 TEST_HEADER include/spdk/bdev.h 00:03:16.561 CXX app/trace/trace.o 00:03:16.561 TEST_HEADER include/spdk/bdev_module.h 00:03:16.561 TEST_HEADER include/spdk/bdev_zone.h 00:03:16.561 TEST_HEADER include/spdk/bit_array.h 00:03:16.561 TEST_HEADER include/spdk/bit_pool.h 00:03:16.561 CC app/trace_record/trace_record.o 00:03:16.561 TEST_HEADER include/spdk/blob_bdev.h 00:03:16.561 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:16.561 TEST_HEADER include/spdk/blobfs.h 00:03:16.561 TEST_HEADER include/spdk/blob.h 00:03:16.561 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:16.561 TEST_HEADER include/spdk/conf.h 00:03:16.561 TEST_HEADER include/spdk/config.h 00:03:16.561 TEST_HEADER include/spdk/cpuset.h 00:03:16.561 TEST_HEADER include/spdk/crc16.h 00:03:16.561 TEST_HEADER include/spdk/crc32.h 00:03:16.561 TEST_HEADER include/spdk/crc64.h 00:03:16.561 TEST_HEADER include/spdk/dif.h 00:03:16.561 TEST_HEADER include/spdk/dma.h 00:03:16.561 TEST_HEADER include/spdk/endian.h 00:03:16.561 TEST_HEADER include/spdk/env_dpdk.h 00:03:16.561 TEST_HEADER include/spdk/env.h 00:03:16.561 TEST_HEADER include/spdk/event.h 00:03:16.561 TEST_HEADER include/spdk/fd_group.h 00:03:16.561 CC app/nvmf_tgt/nvmf_main.o 00:03:16.561 TEST_HEADER include/spdk/fd.h 00:03:16.561 TEST_HEADER include/spdk/file.h 00:03:16.561 TEST_HEADER include/spdk/ftl.h 00:03:16.561 TEST_HEADER include/spdk/gpt_spec.h 00:03:16.561 TEST_HEADER include/spdk/hexlify.h 00:03:16.561 TEST_HEADER include/spdk/histogram_data.h 00:03:16.561 TEST_HEADER include/spdk/idxd.h 00:03:16.561 TEST_HEADER include/spdk/idxd_spec.h 00:03:16.561 TEST_HEADER include/spdk/init.h 00:03:16.561 TEST_HEADER include/spdk/ioat.h 00:03:16.561 TEST_HEADER 
include/spdk/ioat_spec.h 00:03:16.561 TEST_HEADER include/spdk/iscsi_spec.h 00:03:16.561 TEST_HEADER include/spdk/json.h 00:03:16.561 TEST_HEADER include/spdk/jsonrpc.h 00:03:16.561 TEST_HEADER include/spdk/keyring.h 00:03:16.561 CC examples/util/zipf/zipf.o 00:03:16.561 TEST_HEADER include/spdk/keyring_module.h 00:03:16.561 CC test/thread/poller_perf/poller_perf.o 00:03:16.561 TEST_HEADER include/spdk/likely.h 00:03:16.561 TEST_HEADER include/spdk/log.h 00:03:16.561 TEST_HEADER include/spdk/lvol.h 00:03:16.561 TEST_HEADER include/spdk/memory.h 00:03:16.561 TEST_HEADER include/spdk/mmio.h 00:03:16.561 TEST_HEADER include/spdk/nbd.h 00:03:16.561 TEST_HEADER include/spdk/net.h 00:03:16.561 CC examples/ioat/perf/perf.o 00:03:16.820 TEST_HEADER include/spdk/notify.h 00:03:16.820 TEST_HEADER include/spdk/nvme.h 00:03:16.820 TEST_HEADER include/spdk/nvme_intel.h 00:03:16.820 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:16.820 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:16.820 TEST_HEADER include/spdk/nvme_spec.h 00:03:16.820 TEST_HEADER include/spdk/nvme_zns.h 00:03:16.820 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:16.820 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:16.820 TEST_HEADER include/spdk/nvmf.h 00:03:16.820 TEST_HEADER include/spdk/nvmf_spec.h 00:03:16.820 TEST_HEADER include/spdk/nvmf_transport.h 00:03:16.820 TEST_HEADER include/spdk/opal.h 00:03:16.820 TEST_HEADER include/spdk/opal_spec.h 00:03:16.820 TEST_HEADER include/spdk/pci_ids.h 00:03:16.820 TEST_HEADER include/spdk/pipe.h 00:03:16.820 TEST_HEADER include/spdk/queue.h 00:03:16.820 TEST_HEADER include/spdk/reduce.h 00:03:16.820 TEST_HEADER include/spdk/rpc.h 00:03:16.820 TEST_HEADER include/spdk/scheduler.h 00:03:16.820 TEST_HEADER include/spdk/scsi.h 00:03:16.820 CC test/dma/test_dma/test_dma.o 00:03:16.820 TEST_HEADER include/spdk/scsi_spec.h 00:03:16.820 TEST_HEADER include/spdk/sock.h 00:03:16.820 TEST_HEADER include/spdk/stdinc.h 00:03:16.820 TEST_HEADER include/spdk/string.h 00:03:16.820 TEST_HEADER include/spdk/thread.h 00:03:16.820 TEST_HEADER include/spdk/trace.h 00:03:16.820 TEST_HEADER include/spdk/trace_parser.h 00:03:16.820 TEST_HEADER include/spdk/tree.h 00:03:16.820 TEST_HEADER include/spdk/ublk.h 00:03:16.820 TEST_HEADER include/spdk/util.h 00:03:16.820 TEST_HEADER include/spdk/uuid.h 00:03:16.820 CC test/app/bdev_svc/bdev_svc.o 00:03:16.820 TEST_HEADER include/spdk/version.h 00:03:16.820 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:16.820 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:16.820 TEST_HEADER include/spdk/vhost.h 00:03:16.820 TEST_HEADER include/spdk/vmd.h 00:03:16.820 TEST_HEADER include/spdk/xor.h 00:03:16.820 TEST_HEADER include/spdk/zipf.h 00:03:16.820 CXX test/cpp_headers/accel.o 00:03:16.820 LINK interrupt_tgt 00:03:16.820 LINK poller_perf 00:03:16.820 LINK nvmf_tgt 00:03:16.820 LINK zipf 00:03:16.820 LINK spdk_trace_record 00:03:17.078 LINK bdev_svc 00:03:17.078 LINK ioat_perf 00:03:17.078 CXX test/cpp_headers/accel_module.o 00:03:17.078 CXX test/cpp_headers/assert.o 00:03:17.078 LINK spdk_trace 00:03:17.078 CC examples/ioat/verify/verify.o 00:03:17.336 CC app/iscsi_tgt/iscsi_tgt.o 00:03:17.336 LINK test_dma 00:03:17.336 CXX test/cpp_headers/barrier.o 00:03:17.336 CC test/rpc_client/rpc_client_test.o 00:03:17.336 CC test/event/event_perf/event_perf.o 00:03:17.336 CC examples/thread/thread/thread_ex.o 00:03:17.336 LINK verify 00:03:17.336 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.336 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:17.336 CXX test/cpp_headers/base64.o 00:03:17.336 
CC app/spdk_tgt/spdk_tgt.o 00:03:17.594 CXX test/cpp_headers/bdev.o 00:03:17.594 LINK iscsi_tgt 00:03:17.594 LINK event_perf 00:03:17.594 LINK rpc_client_test 00:03:17.594 LINK thread 00:03:17.594 LINK spdk_tgt 00:03:17.594 CXX test/cpp_headers/bdev_module.o 00:03:17.852 CC test/event/reactor/reactor.o 00:03:17.852 CC test/accel/dif/dif.o 00:03:17.852 CC test/env/vtophys/vtophys.o 00:03:17.852 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:17.852 CC test/blobfs/mkfs/mkfs.o 00:03:17.853 LINK reactor 00:03:17.853 CXX test/cpp_headers/bdev_zone.o 00:03:17.853 LINK nvme_fuzz 00:03:17.853 LINK vtophys 00:03:18.111 CC app/spdk_lspci/spdk_lspci.o 00:03:18.111 LINK mem_callbacks 00:03:18.111 LINK env_dpdk_post_init 00:03:18.111 CC examples/sock/hello_world/hello_sock.o 00:03:18.111 LINK mkfs 00:03:18.111 CXX test/cpp_headers/bit_array.o 00:03:18.111 CC test/event/reactor_perf/reactor_perf.o 00:03:18.111 LINK spdk_lspci 00:03:18.111 CXX test/cpp_headers/bit_pool.o 00:03:18.111 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:18.111 CC test/env/memory/memory_ut.o 00:03:18.369 CXX test/cpp_headers/blob_bdev.o 00:03:18.369 LINK reactor_perf 00:03:18.369 LINK dif 00:03:18.369 CC test/env/pci/pci_ut.o 00:03:18.369 LINK hello_sock 00:03:18.369 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:18.369 CC app/spdk_nvme_perf/perf.o 00:03:18.369 CC app/spdk_nvme_identify/identify.o 00:03:18.369 CXX test/cpp_headers/blobfs_bdev.o 00:03:18.627 CC test/event/app_repeat/app_repeat.o 00:03:18.627 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:18.627 CXX test/cpp_headers/blobfs.o 00:03:18.627 LINK app_repeat 00:03:18.886 CXX test/cpp_headers/blob.o 00:03:18.886 CC examples/vmd/lsvmd/lsvmd.o 00:03:18.886 CC examples/idxd/perf/perf.o 00:03:18.886 LINK pci_ut 00:03:18.886 CXX test/cpp_headers/conf.o 00:03:18.886 LINK lsvmd 00:03:18.886 CC test/event/scheduler/scheduler.o 00:03:19.143 LINK vhost_fuzz 00:03:19.143 CXX test/cpp_headers/config.o 00:03:19.143 CXX test/cpp_headers/cpuset.o 00:03:19.143 CC examples/vmd/led/led.o 00:03:19.143 LINK idxd_perf 00:03:19.401 LINK scheduler 00:03:19.401 CC examples/accel/perf/accel_perf.o 00:03:19.401 CC app/spdk_nvme_discover/discovery_aer.o 00:03:19.401 CXX test/cpp_headers/crc16.o 00:03:19.401 CXX test/cpp_headers/crc32.o 00:03:19.401 LINK led 00:03:19.401 LINK spdk_nvme_perf 00:03:19.401 LINK spdk_nvme_identify 00:03:19.659 LINK memory_ut 00:03:19.659 CXX test/cpp_headers/crc64.o 00:03:19.659 LINK spdk_nvme_discover 00:03:19.659 CC app/spdk_top/spdk_top.o 00:03:19.659 CC test/app/histogram_perf/histogram_perf.o 00:03:19.659 CC test/app/jsoncat/jsoncat.o 00:03:19.659 CXX test/cpp_headers/dif.o 00:03:19.916 CC test/lvol/esnap/esnap.o 00:03:19.916 LINK histogram_perf 00:03:19.916 CC test/nvme/aer/aer.o 00:03:19.916 CC test/nvme/reset/reset.o 00:03:19.916 LINK jsoncat 00:03:19.916 LINK accel_perf 00:03:19.917 CXX test/cpp_headers/dma.o 00:03:19.917 CC test/bdev/bdevio/bdevio.o 00:03:20.174 CC test/app/stub/stub.o 00:03:20.174 CXX test/cpp_headers/endian.o 00:03:20.174 LINK reset 00:03:20.174 CC app/vhost/vhost.o 00:03:20.174 LINK aer 00:03:20.174 LINK stub 00:03:20.174 CXX test/cpp_headers/env_dpdk.o 00:03:20.174 CC examples/blob/hello_world/hello_blob.o 00:03:20.440 CXX test/cpp_headers/env.o 00:03:20.440 LINK vhost 00:03:20.440 LINK iscsi_fuzz 00:03:20.440 LINK bdevio 00:03:20.440 CC test/nvme/sgl/sgl.o 00:03:20.440 CXX test/cpp_headers/event.o 00:03:20.440 CC test/nvme/e2edp/nvme_dp.o 00:03:20.440 LINK hello_blob 00:03:20.723 CC examples/nvme/hello_world/hello_world.o 
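[Note: the CXX test/cpp_headers/*.o entries in this stretch are SPDK's header self-containment check: each public header under include/spdk/ is compiled as its own C++ translation unit, so a header that fails to include its own dependencies (or is not C++-clean) breaks the build here rather than in some later consumer. A minimal bash sketch of the idea; the loop, paths, and compiler flags are illustrative assumptions, not the project's actual make rules:

  mkdir -p test/cpp_headers
  for hdr in include/spdk/*.h; do
      name=$(basename "$hdr" .h)
      # One stub translation unit per header; it includes nothing else, so
      # the header itself must pull in everything it depends on.
      printf '#include "spdk/%s.h"\n' "$name" > "test/cpp_headers/$name.cpp"
      g++ -I include -c "test/cpp_headers/$name.cpp" -o "test/cpp_headers/$name.o"
  done

The TEST_HEADER lines earlier in the log list the headers this rule is applied to.]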
00:03:20.723 CC examples/nvme/reconnect/reconnect.o 00:03:20.723 CXX test/cpp_headers/fd_group.o 00:03:20.723 LINK sgl 00:03:20.723 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:20.723 CC examples/nvme/arbitration/arbitration.o 00:03:20.723 LINK spdk_top 00:03:20.981 LINK nvme_dp 00:03:20.981 CC examples/blob/cli/blobcli.o 00:03:20.981 CXX test/cpp_headers/fd.o 00:03:20.981 LINK hello_world 00:03:20.981 CXX test/cpp_headers/file.o 00:03:20.981 CC app/spdk_dd/spdk_dd.o 00:03:21.240 CXX test/cpp_headers/ftl.o 00:03:21.240 CC test/nvme/overhead/overhead.o 00:03:21.240 LINK reconnect 00:03:21.240 CC examples/nvme/hotplug/hotplug.o 00:03:21.240 LINK arbitration 00:03:21.240 CC examples/bdev/hello_world/hello_bdev.o 00:03:21.240 CXX test/cpp_headers/gpt_spec.o 00:03:21.498 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:21.498 LINK nvme_manage 00:03:21.498 LINK hotplug 00:03:21.498 CC examples/nvme/abort/abort.o 00:03:21.498 LINK overhead 00:03:21.498 LINK blobcli 00:03:21.498 CXX test/cpp_headers/hexlify.o 00:03:21.498 LINK spdk_dd 00:03:21.498 LINK hello_bdev 00:03:21.498 LINK cmb_copy 00:03:21.757 CXX test/cpp_headers/histogram_data.o 00:03:21.757 CC test/nvme/startup/startup.o 00:03:21.757 CC test/nvme/err_injection/err_injection.o 00:03:21.757 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:21.757 CC test/nvme/reserve/reserve.o 00:03:21.757 CC test/nvme/simple_copy/simple_copy.o 00:03:21.757 CXX test/cpp_headers/idxd.o 00:03:22.016 CC app/fio/nvme/fio_plugin.o 00:03:22.016 CC examples/bdev/bdevperf/bdevperf.o 00:03:22.016 LINK abort 00:03:22.016 LINK startup 00:03:22.016 LINK pmr_persistence 00:03:22.016 LINK err_injection 00:03:22.016 CXX test/cpp_headers/idxd_spec.o 00:03:22.016 LINK reserve 00:03:22.016 LINK simple_copy 00:03:22.274 CC test/nvme/connect_stress/connect_stress.o 00:03:22.274 CC test/nvme/compliance/nvme_compliance.o 00:03:22.274 CC test/nvme/boot_partition/boot_partition.o 00:03:22.274 CC app/fio/bdev/fio_plugin.o 00:03:22.274 CXX test/cpp_headers/init.o 00:03:22.274 LINK boot_partition 00:03:22.274 CC test/nvme/fused_ordering/fused_ordering.o 00:03:22.274 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:22.274 LINK connect_stress 00:03:22.532 CXX test/cpp_headers/ioat.o 00:03:22.532 LINK spdk_nvme 00:03:22.532 LINK fused_ordering 00:03:22.532 LINK nvme_compliance 00:03:22.532 LINK doorbell_aers 00:03:22.532 CXX test/cpp_headers/ioat_spec.o 00:03:22.532 CC test/nvme/fdp/fdp.o 00:03:22.532 CC test/nvme/cuse/cuse.o 00:03:22.791 CXX test/cpp_headers/iscsi_spec.o 00:03:22.791 CXX test/cpp_headers/json.o 00:03:22.791 LINK spdk_bdev 00:03:22.791 CXX test/cpp_headers/jsonrpc.o 00:03:22.791 CXX test/cpp_headers/keyring.o 00:03:22.791 CXX test/cpp_headers/keyring_module.o 00:03:22.791 LINK bdevperf 00:03:22.791 CXX test/cpp_headers/likely.o 00:03:22.791 CXX test/cpp_headers/log.o 00:03:23.050 CXX test/cpp_headers/lvol.o 00:03:23.050 CXX test/cpp_headers/memory.o 00:03:23.050 CXX test/cpp_headers/mmio.o 00:03:23.050 CXX test/cpp_headers/nbd.o 00:03:23.050 LINK fdp 00:03:23.050 CXX test/cpp_headers/net.o 00:03:23.050 CXX test/cpp_headers/notify.o 00:03:23.050 CXX test/cpp_headers/nvme.o 00:03:23.050 CXX test/cpp_headers/nvme_intel.o 00:03:23.050 CXX test/cpp_headers/nvme_ocssd.o 00:03:23.308 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:23.308 CXX test/cpp_headers/nvme_spec.o 00:03:23.308 CXX test/cpp_headers/nvme_zns.o 00:03:23.308 CXX test/cpp_headers/nvmf_cmd.o 00:03:23.308 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:23.308 CXX test/cpp_headers/nvmf.o 00:03:23.308 CC 
examples/nvmf/nvmf/nvmf.o 00:03:23.308 CXX test/cpp_headers/nvmf_spec.o 00:03:23.308 CXX test/cpp_headers/nvmf_transport.o 00:03:23.566 CXX test/cpp_headers/opal.o 00:03:23.566 CXX test/cpp_headers/opal_spec.o 00:03:23.566 CXX test/cpp_headers/pci_ids.o 00:03:23.566 CXX test/cpp_headers/pipe.o 00:03:23.566 CXX test/cpp_headers/queue.o 00:03:23.566 CXX test/cpp_headers/reduce.o 00:03:23.566 CXX test/cpp_headers/rpc.o 00:03:23.566 CXX test/cpp_headers/scheduler.o 00:03:23.566 CXX test/cpp_headers/scsi.o 00:03:23.566 CXX test/cpp_headers/scsi_spec.o 00:03:23.566 CXX test/cpp_headers/sock.o 00:03:23.566 CXX test/cpp_headers/stdinc.o 00:03:23.824 LINK nvmf 00:03:23.824 CXX test/cpp_headers/string.o 00:03:23.824 CXX test/cpp_headers/thread.o 00:03:23.824 CXX test/cpp_headers/trace.o 00:03:23.824 CXX test/cpp_headers/trace_parser.o 00:03:23.824 CXX test/cpp_headers/tree.o 00:03:23.824 CXX test/cpp_headers/ublk.o 00:03:23.824 CXX test/cpp_headers/util.o 00:03:23.824 CXX test/cpp_headers/uuid.o 00:03:23.824 CXX test/cpp_headers/version.o 00:03:23.824 CXX test/cpp_headers/vfio_user_pci.o 00:03:23.824 CXX test/cpp_headers/vfio_user_spec.o 00:03:23.824 CXX test/cpp_headers/vhost.o 00:03:24.082 CXX test/cpp_headers/vmd.o 00:03:24.082 CXX test/cpp_headers/xor.o 00:03:24.082 CXX test/cpp_headers/zipf.o 00:03:24.082 LINK cuse 00:03:26.614 LINK esnap 00:03:26.614 00:03:26.614 real 1m17.483s 00:03:26.614 user 7m40.535s 00:03:26.614 sys 1m33.630s 00:03:26.614 04:52:41 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:26.614 04:52:41 make -- common/autotest_common.sh@10 -- $ set +x 00:03:26.614 ************************************ 00:03:26.614 END TEST make 00:03:26.614 ************************************ 00:03:26.614 04:52:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:26.614 04:52:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:26.614 04:52:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:26.614 04:52:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.614 04:52:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:26.614 04:52:41 -- pm/common@44 -- $ pid=5247 00:03:26.614 04:52:41 -- pm/common@50 -- $ kill -TERM 5247 00:03:26.614 04:52:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.614 04:52:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:26.614 04:52:41 -- pm/common@44 -- $ pid=5248 00:03:26.614 04:52:41 -- pm/common@50 -- $ kill -TERM 5248 00:03:26.872 04:52:41 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:26.872 04:52:41 -- nvmf/common.sh@7 -- # uname -s 00:03:26.872 04:52:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:26.872 04:52:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:26.872 04:52:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:26.872 04:52:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:26.872 04:52:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:26.872 04:52:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:26.872 04:52:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:26.872 04:52:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:26.872 04:52:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:26.872 04:52:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:26.872 04:52:41 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1c6d3f82-85be-430a-8cc2-e7f7d95cebc9 00:03:26.872 04:52:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=1c6d3f82-85be-430a-8cc2-e7f7d95cebc9 00:03:26.872 04:52:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:26.872 04:52:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:26.872 04:52:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:26.872 04:52:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:26.872 04:52:41 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:26.872 04:52:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:26.872 04:52:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:26.872 04:52:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:26.873 04:52:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.873 04:52:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.873 04:52:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.873 04:52:41 -- paths/export.sh@5 -- # export PATH 00:03:26.873 04:52:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.873 04:52:41 -- nvmf/common.sh@47 -- # : 0 00:03:26.873 04:52:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:26.873 04:52:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:26.873 04:52:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:26.873 04:52:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:26.873 04:52:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:26.873 04:52:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:26.873 04:52:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:26.873 04:52:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:26.873 04:52:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:26.873 04:52:41 -- spdk/autotest.sh@32 -- # uname -s 00:03:26.873 04:52:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:26.873 04:52:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:26.873 04:52:41 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:26.873 04:52:41 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:26.873 04:52:41 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:26.873 04:52:41 -- spdk/autotest.sh@44 -- # modprobe nbd 
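[Note: the autotest.sh@33-40 steps just above redirect kernel core dumps for the duration of the run. Linux treats a /proc/sys/kernel/core_pattern value that begins with '|' as a handler program: the kernel execs it on every crash, streams the core image to its stdin, and expands the %-specifiers per dump (%P is the crashing PID, %s the signal number, %t the dump time). A hedged bash sketch of the same setup; the variable names are assumptions, and writing core_pattern requires root:

  # Remember the stock handler (typically systemd-coredump) so it can be
  # restored once the test run finishes.
  old_core_pattern=$(</proc/sys/kernel/core_pattern)
  mkdir -p "$output_dir/coredumps"
  # The leading '|' pipes each crash into the collector script instead of
  # writing a core file in the crashing process's working directory.
  echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
]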
00:03:26.873 04:52:41 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:26.873 04:52:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:26.873 04:52:41 -- spdk/autotest.sh@48 -- # udevadm_pid=53789 00:03:26.873 04:52:41 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:26.873 04:52:41 -- pm/common@17 -- # local monitor 00:03:26.873 04:52:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.873 04:52:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:26.873 04:52:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.873 04:52:41 -- pm/common@25 -- # sleep 1 00:03:26.873 04:52:41 -- pm/common@21 -- # date +%s 00:03:26.873 04:52:41 -- pm/common@21 -- # date +%s 00:03:26.873 04:52:41 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721796761 00:03:26.873 04:52:41 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721796761 00:03:26.873 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721796761_collect-vmstat.pm.log 00:03:26.873 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721796761_collect-cpu-load.pm.log 00:03:27.808 04:52:42 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:27.808 04:52:42 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:27.808 04:52:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:27.808 04:52:42 -- common/autotest_common.sh@10 -- # set +x 00:03:27.808 04:52:42 -- spdk/autotest.sh@59 -- # create_test_list 00:03:27.808 04:52:42 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:27.808 04:52:42 -- common/autotest_common.sh@10 -- # set +x 00:03:27.808 04:52:42 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:27.808 04:52:42 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:27.808 04:52:42 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:27.808 04:52:42 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:27.808 04:52:42 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:27.808 04:52:42 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:27.808 04:52:42 -- common/autotest_common.sh@1453 -- # uname 00:03:27.808 04:52:42 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:03:27.808 04:52:42 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:27.808 04:52:42 -- common/autotest_common.sh@1473 -- # uname 00:03:27.808 04:52:42 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:03:27.808 04:52:42 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:27.808 04:52:42 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:27.808 04:52:42 -- spdk/autotest.sh@72 -- # hash lcov 00:03:27.808 04:52:42 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:27.808 04:52:42 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:27.808 --rc lcov_branch_coverage=1 00:03:27.808 --rc lcov_function_coverage=1 00:03:27.808 --rc genhtml_branch_coverage=1 00:03:27.808 --rc genhtml_function_coverage=1 00:03:27.808 --rc genhtml_legend=1 00:03:27.808 --rc geninfo_all_blocks=1 00:03:27.808 ' 00:03:27.808 04:52:42 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:27.808 --rc lcov_branch_coverage=1 00:03:27.808 --rc 
lcov_function_coverage=1 00:03:27.808 --rc genhtml_branch_coverage=1 00:03:27.808 --rc genhtml_function_coverage=1 00:03:27.808 --rc genhtml_legend=1 00:03:27.808 --rc geninfo_all_blocks=1 00:03:27.808 ' 00:03:27.808 04:52:42 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:27.809 --rc lcov_branch_coverage=1 00:03:27.809 --rc lcov_function_coverage=1 00:03:27.809 --rc genhtml_branch_coverage=1 00:03:27.809 --rc genhtml_function_coverage=1 00:03:27.809 --rc genhtml_legend=1 00:03:27.809 --rc geninfo_all_blocks=1 00:03:27.809 --no-external' 00:03:27.809 04:52:42 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:27.809 --rc lcov_branch_coverage=1 00:03:27.809 --rc lcov_function_coverage=1 00:03:27.809 --rc genhtml_branch_coverage=1 00:03:27.809 --rc genhtml_function_coverage=1 00:03:27.809 --rc genhtml_legend=1 00:03:27.809 --rc geninfo_all_blocks=1 00:03:27.809 --no-external' 00:03:27.809 04:52:42 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:28.067 lcov: LCOV version 1.14 00:03:28.067 04:52:42 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:42.944 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:42.944 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:52.943 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:52.943 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:52.943 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:52.943 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:52.943 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:52.943 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:52.943 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:52.943 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:52.943 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:52.943 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:52.943 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:52.944 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:52.944 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:52.944 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:52.944 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:52.944 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:52.944 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:52.944 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno
[the same pair of entries, '<header>.gcno:no functions found' followed by 'geninfo: WARNING: GCOV did not produce any data for <header>.gcno', repeats here for every remaining object from bit_pool.gcno through stdinc.gcno; the near-duplicate warnings are condensed]
00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:52.945 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:52.945 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:55.479 04:53:09 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:55.479 04:53:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:55.479 04:53:09 -- common/autotest_common.sh@10 -- # set +x 00:03:55.479 04:53:09 -- spdk/autotest.sh@91 -- # rm -f 00:03:55.479 04:53:09 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.738 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:03:56.305 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:56.305 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:56.305 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:56.305 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:56.305 04:53:10 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:56.305 04:53:10 -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:03:56.564 04:53:10 -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:03:56.564 04:53:10 -- common/autotest_common.sh@1668 -- # local nvme bdf 00:03:56.564 04:53:10 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:56.564 04:53:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:56.564 04:53:10 -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:03:56.564 04:53:10 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.564 04:53:10 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:56.564 04:53:10 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:56.564 04:53:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:56.564 04:53:10 -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:03:56.564 04:53:10 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:56.564 04:53:10 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:56.564 04:53:10 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:56.564 04:53:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:03:56.564 04:53:10 -- common/autotest_common.sh@1660 -- # local device=nvme2n1 00:03:56.564 04:53:10 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:56.564 04:53:10 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:56.565 04:53:10 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:56.565 04:53:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:03:56.565 04:53:10 -- common/autotest_common.sh@1660 -- # local device=nvme2n2 00:03:56.565 04:53:10 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:56.565 04:53:10 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:56.565 04:53:10 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:56.565 04:53:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:03:56.565 04:53:10 -- common/autotest_common.sh@1660 -- # local device=nvme2n3 00:03:56.565 04:53:10 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:56.565 04:53:10 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:56.565 04:53:10 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:56.565 04:53:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:03:56.565 04:53:10 -- common/autotest_common.sh@1660 -- # local device=nvme3c3n1 00:03:56.565 04:53:10 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:56.565 04:53:10 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:56.565 04:53:10 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:56.565 04:53:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:03:56.565 04:53:10 -- common/autotest_common.sh@1660 -- # local device=nvme3n1 00:03:56.565 04:53:10 -- common/autotest_common.sh@1662 -- # [[ -e 
/sys/block/nvme3n1/queue/zoned ]] 00:03:56.565 04:53:10 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:56.565 04:53:10 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:56.565 04:53:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.565 04:53:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:56.565 04:53:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:56.565 04:53:10 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:56.565 04:53:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:56.565 No valid GPT data, bailing 00:03:56.565 04:53:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:56.565 04:53:11 -- scripts/common.sh@391 -- # pt= 00:03:56.565 04:53:11 -- scripts/common.sh@392 -- # return 1 00:03:56.565 04:53:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:56.565 1+0 records in 00:03:56.565 1+0 records out 00:03:56.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148805 s, 70.5 MB/s 00:03:56.565 04:53:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.565 04:53:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:56.565 04:53:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:56.565 04:53:11 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:56.565 04:53:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:56.565 No valid GPT data, bailing 00:03:56.565 04:53:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:56.565 04:53:11 -- scripts/common.sh@391 -- # pt= 00:03:56.565 04:53:11 -- scripts/common.sh@392 -- # return 1 00:03:56.565 04:53:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:56.565 1+0 records in 00:03:56.565 1+0 records out 00:03:56.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045729 s, 229 MB/s 00:03:56.565 04:53:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.565 04:53:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:56.565 04:53:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:03:56.565 04:53:11 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:03:56.565 04:53:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:56.565 No valid GPT data, bailing 00:03:56.565 04:53:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:56.565 04:53:11 -- scripts/common.sh@391 -- # pt= 00:03:56.565 04:53:11 -- scripts/common.sh@392 -- # return 1 00:03:56.565 04:53:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:56.565 1+0 records in 00:03:56.565 1+0 records out 00:03:56.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460583 s, 228 MB/s 00:03:56.565 04:53:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.565 04:53:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:56.565 04:53:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:03:56.565 04:53:11 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:03:56.565 04:53:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:56.824 No valid GPT data, bailing 00:03:56.824 04:53:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:56.824 04:53:11 -- scripts/common.sh@391 -- # pt= 00:03:56.824 04:53:11 -- scripts/common.sh@392 -- # return 1 00:03:56.824 04:53:11 -- spdk/autotest.sh@114 
-- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:56.824 1+0 records in 00:03:56.824 1+0 records out 00:03:56.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00396115 s, 265 MB/s 00:03:56.824 04:53:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.824 04:53:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:56.824 04:53:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:03:56.824 04:53:11 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:03:56.824 04:53:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:56.824 No valid GPT data, bailing 00:03:56.824 04:53:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:56.824 04:53:11 -- scripts/common.sh@391 -- # pt= 00:03:56.824 04:53:11 -- scripts/common.sh@392 -- # return 1 00:03:56.824 04:53:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:56.824 1+0 records in 00:03:56.824 1+0 records out 00:03:56.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439535 s, 239 MB/s 00:03:56.824 04:53:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.824 04:53:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:56.824 04:53:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:03:56.824 04:53:11 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:03:56.824 04:53:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:56.824 No valid GPT data, bailing 00:03:56.824 04:53:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:56.824 04:53:11 -- scripts/common.sh@391 -- # pt= 00:03:56.824 04:53:11 -- scripts/common.sh@392 -- # return 1 00:03:56.824 04:53:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:56.824 1+0 records in 00:03:56.824 1+0 records out 00:03:56.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459374 s, 228 MB/s 00:03:56.824 04:53:11 -- spdk/autotest.sh@118 -- # sync 00:03:57.083 04:53:11 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:57.083 04:53:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:57.083 04:53:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:58.985 04:53:13 -- spdk/autotest.sh@124 -- # uname -s 00:03:58.985 04:53:13 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:58.985 04:53:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:58.985 04:53:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.985 04:53:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.985 04:53:13 -- common/autotest_common.sh@10 -- # set +x 00:03:58.985 ************************************ 00:03:58.985 START TEST setup.sh 00:03:58.985 ************************************ 00:03:58.985 04:53:13 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:58.985 * Looking for test storage... 
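[Note: both the device wipe above and the acl suite that starts here open with the same get_zoned_devs scan, because zoned (host-managed) NVMe namespaces cannot simply be overwritten with dd and must be filtered out first. The kernel reports the zoning model in /sys/block/<name>/queue/zoned, which reads 'none' for a conventional device. A standalone bash sketch of that check; the array name mirrors the log, everything else is illustrative:

  declare -A zoned_devs=()
  for nvme in /sys/block/nvme*; do
      dev=$(basename "$nvme")
      # 'none' marks a conventional namespace; 'host-aware' or 'host-managed'
      # marks a zoned device that the wipe and the acl tests must skip.
      if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
          zoned_devs[$dev]=1
      fi
  done
]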
00:03:58.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:58.985 04:53:13 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:58.985 04:53:13 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:58.985 04:53:13 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:58.985 04:53:13 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.985 04:53:13 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.985 04:53:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:58.985 ************************************ 00:03:58.985 START TEST acl 00:03:58.985 ************************************ 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:58.985 * Looking for test storage... 00:03:58.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:58.985 04:53:13 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1668 -- # local nvme bdf 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme2n1 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme2n2 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:03:58.985 04:53:13 
setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme2n3 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme3c3n1 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme3n1 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:58.985 04:53:13 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:58.985 04:53:13 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:58.985 04:53:13 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:58.985 04:53:13 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:58.985 04:53:13 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:58.985 04:53:13 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:58.985 04:53:13 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.985 04:53:13 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.361 04:53:14 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:00.361 04:53:14 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:00.361 04:53:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:00.361 04:53:14 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:00.361 04:53:14 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.361 04:53:14 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:00.620 04:53:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:00.620 04:53:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:00.620 04:53:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.188 Hugepages 00:04:01.188 node hugesize free / total 00:04:01.188 04:53:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:01.188 04:53:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:01.188 04:53:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.188 00:04:01.188 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.188 04:53:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:01.188 04:53:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:01.188 04:53:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:01.189 04:53:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.448 04:53:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:04:01.448 04:53:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:01.448 04:53:15 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:01.448 04:53:15 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:01.448 04:53:15 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:01.448 04:53:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.448 04:53:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:04:01.448 04:53:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:01.448 04:53:15 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:01.448 04:53:15 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:01.448 04:53:15 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:01.448 04:53:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.448 04:53:15 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:01.448 04:53:15 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:01.448 04:53:15 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.448 04:53:15 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.448 04:53:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:01.448 ************************************ 00:04:01.448 START TEST denied 00:04:01.448 ************************************ 00:04:01.448 04:53:15 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:01.448 04:53:15 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:01.448 04:53:15 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:01.448 04:53:15 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.448 04:53:15 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:01.448 04:53:15 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:02.826 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:02.826 04:53:17 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:02.826 04:53:17 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:02.826 04:53:17 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:02.826 04:53:17 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:02.826 04:53:17 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:02.826 04:53:17 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:02.826 04:53:17 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:02.826 04:53:17 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:02.826 04:53:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:02.826 04:53:17 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.390 00:04:09.390 real 0m7.113s 00:04:09.390 user 0m0.828s 00:04:09.390 sys 0m1.327s 00:04:09.390 04:53:23 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.390 04:53:23 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:09.390 ************************************ 00:04:09.390 END TEST denied 00:04:09.390 ************************************ 00:04:09.390 04:53:23 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:09.390 04:53:23 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.390 04:53:23 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.390 04:53:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:09.390 ************************************ 00:04:09.390 START TEST allowed 00:04:09.390 ************************************ 00:04:09.390 04:53:23 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:09.390 04:53:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:09.390 04:53:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:09.390 04:53:23 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:09.390 04:53:23 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.390 04:53:23 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:09.650 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:13.0 ]]
00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver
00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:09.650 04:53:24 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:11.029
00:04:11.029 real 0m2.156s
00:04:11.029 user 0m0.996s
00:04:11.029 sys 0m1.154s
00:04:11.029 04:53:25 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:11.029 04:53:25 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:04:11.029 ************************************
00:04:11.029 END TEST allowed
00:04:11.029 ************************************
00:04:11.029
00:04:11.029 real 0m11.906s
00:04:11.029 user 0m3.036s
00:04:11.029 sys 0m3.903s
00:04:11.029 04:53:25 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:11.029 ************************************
00:04:11.029 04:53:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:11.029 END TEST acl
00:04:11.029 ************************************
00:04:11.029 04:53:25 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:11.029 04:53:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:11.029 04:53:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:11.029 04:53:25 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:11.029 ************************************
00:04:11.029 START TEST hugepages
00:04:11.029 ************************************
00:04:11.029 04:53:25 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:11.029 * Looking for test storage...
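Both halves of the acl suite that just ended exercise the PCI filter variables honoured by scripts/setup.sh: PCI_BLOCKED must make a controller be skipped, PCI_ALLOWED must restrict rebinding to the listed address. Condensed from the denied and allowed traces above, the round trip is:

  # denied: a blocked controller must stay on its kernel driver
  PCI_BLOCKED=' 0000:00:10.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config |
      grep 'Skipping denied controller at 0000:00:10.0'

  # allowed: only the listed controller is rebound to a userspace driver
  PCI_ALLOWED=0000:00:10.0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh config |
      grep -E '0000:00:10.0 .*: nvme -> .*'

  # return every device to its kernel driver between runs
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset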
00:04:11.029 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5806648 kB' 'MemAvailable: 7391700 kB' 'Buffers: 2436 kB' 'Cached: 1798312 kB' 'SwapCached: 0 kB' 'Active: 445076 kB' 'Inactive: 1458252 kB' 'Active(anon): 113092 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 104292 kB' 'Mapped: 48632 kB' 'Shmem: 10512 kB' 'KReclaimable: 63508 kB' 'Slab: 136472 kB' 'SReclaimable: 63508 kB' 'SUnreclaim: 72964 kB' 'KernelStack: 6460 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 326176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.029 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.030 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
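Each iteration above is get_meminfo (setup/common.sh@17-33) discarding one more /proc/meminfo field; the loop is about to reach Hugepagesize and return 2048. The same parse fits in a few lines once the per-NUMA-node branch (the /sys/devices/system/node/.../meminfo check visible at common.sh@23) is stripped out — a sketch, not the helper itself:

  get_meminfo() {
      local get=$1 var val _
      # A /proc/meminfo line reads "Hugepagesize:    2048 kB"; splitting on
      # ':' plus spaces leaves the key in var and the number in val.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done </proc/meminfo
      return 1
  }

  default_hugepages=$(get_meminfo Hugepagesize)   # 2048 on this runner, i.e. 2 MiB pages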
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:11.031 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:11.031 04:53:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:11.031 04:53:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:11.031 04:53:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:11.031 ************************************
00:04:11.031 START TEST default_setup
00:04:11.031 ************************************
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:11.031 04:53:25 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:11.600 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:12.169 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:12.169 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:04:12.169 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:12.169 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:12.169
04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.169 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7920020 kB' 'MemAvailable: 9504872 kB' 'Buffers: 2436 kB' 'Cached: 1798296 kB' 'SwapCached: 0 kB' 'Active: 462316 kB' 'Inactive: 1458272 kB' 'Active(anon): 130332 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121700 kB' 'Mapped: 48756 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135724 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72656 kB' 'KernelStack: 6416 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.170 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- 
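This second scan, about to finish below with echo 0, is verify_nr_hugepages fetching AnonHugePages. The guard at hugepages.sh@96 only takes this branch because transparent hugepages are not pinned to [never] on this kernel ("always [madvise] never"); anonymous THP memory could otherwise sit alongside the explicitly reserved pages, so it is sampled before the counters are compared. A sketch of that gate, reusing the get_meminfo sketch above (the sysfs path is the standard kernel THP switch):

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
  if [[ $thp != *'[never]'* ]]; then
      anon=$(get_meminfo AnonHugePages)   # 0 kB on this runner: no anonymous THP in flight
  else
      anon=0
  fi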
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.171 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7920020 kB' 'MemAvailable: 9504872 kB' 'Buffers: 2436 kB' 'Cached: 1798296 kB' 'SwapCached: 0 kB' 'Active: 462284 kB' 'Inactive: 1458272 kB' 'Active(anon): 130300 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121668 kB' 'Mapped: 48756 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135724 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72656 kB' 'KernelStack: 6400 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
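[editor's note] A minimal sketch of the helper the trace above is stepping through. The names (get_meminfo, get, node, mem_f, mem) follow the xtrace of setup/common.sh@16-33; the body is reconstructed from the trace, not copied from SPDK, so treat it as an approximation:

    #!/usr/bin/env bash
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, prefer that node's own meminfo file
        # (node is empty in this run, so this branch is skipped).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <N> "; strip it (extglob).
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan key by key until the requested field matches, then print its value.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    surp=$(get_meminfo HugePages_Surp)   # e.g. 0 on this host

The linear scan is why the log shows one IFS/read/[[ .. ]]/continue quartet per /proc/meminfo key for every single lookup.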
00:04:12.171-00:04:12.451 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: the loop walks the snapshot keys MemTotal through HugePages_Rsvd; none match HugePages_Surp]
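[editor's note] The backslash-escaped right-hand side in every comparison (e.g. \H\u\g\e\P\a\g\e\s\_\S\u\r\p) is not literal script text: it is how bash's xtrace re-quotes a quoted [[ ]] operand to show it is matched as a literal string rather than as a glob pattern. A quick reproduction, assuming bash 4+:

    $ set -x
    $ get=HugePages_Surp
    $ [[ MemTotal == "$get" ]]
    + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]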
00:04:12.451 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.451 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:12.451 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:12.451 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:12.451 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:12.451 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@17-29 -- # [prologue condensed: get=HugePages_Rsvd, node unset, mem_f=/proc/meminfo, mapfile -t mem, "Node <N> " prefixes stripped]
00:04:12.451 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7919768 kB' 'MemAvailable: 9504620 kB' 'Buffers: 2436 kB' 'Cached: 1798296 kB' 'SwapCached: 0 kB' 'Active: 462296 kB' 'Inactive: 1458272 kB' 'Active(anon): 130312 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121424 kB' 'Mapped: 48756 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135724 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72656 kB' 'KernelStack: 6384 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:04:12.451-00:04:12.453 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: the loop walks MemTotal through HugePages_Free; none match HugePages_Rsvd]
00:04:12.453 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:12.453 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:12.453 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:12.453 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:12.453 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:12.453 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:12.453 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:12.453 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:12.453 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:12.453 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
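[editor's note] The arithmetic at hugepages.sh@107-109 reduces to the consistency check below. Variable names follow the trace; treating the literal 1024 as the requested default page count ("want") is an assumption for readability:

    want=1024              # assumed: page count this run configured
    anon=0                 # get_meminfo AnonHugePages, above
    surp=0                 # get_meminfo HugePages_Surp, above
    resv=0                 # get_meminfo HugePages_Rsvd, above
    nr_hugepages=1024      # echoed at hugepages.sh@102
    (( want == nr_hugepages + surp + resv ))  # passes: 1024 == 1024 + 0 + 0
    (( want == nr_hugepages ))                # passes, so @110 re-reads HugePages_Total next

With zero surplus and zero reserved pages, the kernel's pool exactly matches the request, which is what lets the test proceed to the HugePages_Total lookup.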
00:04:12.453 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:12.453 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@17-29 -- # [prologue condensed: get=HugePages_Total, node unset, mem_f=/proc/meminfo, mapfile -t mem, "Node <N> " prefixes stripped]
00:04:12.453 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7919768 kB' 'MemAvailable: 9504620 kB' 'Buffers: 2436 kB' 'Cached: 1798296 kB' 'SwapCached: 0 kB' 'Active: 462800 kB' 'Inactive: 1458272 kB' 'Active(anon): 130816 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122276 kB' 'Mapped: 48636 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135712 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72644 kB' 'KernelStack: 6464 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
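[editor's note] The mem=("${mem[@]#Node +([0-9]) }") step in each prologue only does work for per-node reads; on plain /proc/meminfo it is a no-op. A tiny demonstration of that extglob expansion (the sample array contents are made up for illustration):

    shopt -s extglob
    mem=('Node 0 MemTotal: 12241976 kB' 'Node 0 MemFree: 7919768 kB')
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node <N> " prefix from every element
    printf '%s\n' "${mem[@]}"
    # MemTotal: 12241976 kB
    # MemFree: 7919768 kB

Stripping the prefix lets the same key-scan loop serve both the global and the per-node meminfo formats.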
00:04:12.453-00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: the loop walks MemTotal through VmallocUsed; no match for HugePages_Total yet]
setup/common.sh@32 -- # continue 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
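The per-key continue cycle above is setup/common.sh's get_meminfo loop unrolled by xtrace. Condensed into ordinary shell, the pattern it exercises looks roughly like the sketch below (simplified: the traced helper mapfiles the file and strips the Node-N prefix with an extglob rather than piping through sed):

    get_meminfo() {                                    # e.g. get_meminfo HugePages_Total 0
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # a per-node query reads the sysfs copy instead, as the trace shows
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do           # split "Key:   value kB"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")  # drop the "Node 0 " prefix
        return 1
    }

Here it just returned 1024 for HugePages_Total; the long runs of continue lines are simply this loop visiting every other field first.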
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7919768 kB' 'MemUsed: 4322208 kB' 'SwapCached: 0 kB' 'Active: 462280 kB' 'Inactive: 1458276 kB' 'Active(anon): 130296 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 1800732 kB' 'Mapped: 48636 kB' 'AnonPages: 121768 kB' 'Shmem: 10472 kB' 'KernelStack: 6432 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63068 kB' 'Slab: 135704 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:12.455 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[xtrace trimmed: the read/continue cycle walks the node0 snapshot field by field until HugePages_Surp matches]
00:04:12.457 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.457 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:12.457 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:12.457 node0=1024 expecting 1024
00:04:12.457 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:12.457 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:12.457 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:12.457 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:12.457 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:12.457 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:12.457
00:04:12.457 real 0m1.375s
00:04:12.457 user 0m0.594s
00:04:12.457 sys 0m0.752s
00:04:12.457 04:53:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:12.457 ************************************
00:04:12.457 END TEST default_setup
00:04:12.457 ************************************
00:04:12.457 04:53:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:12.457 04:53:26 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
04:53:26
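What the tail of the test just asserted, in plain shell: the pool the kernel reports must equal the requested pages plus surplus and reserved, and node0 (the only node on this guest) must hold the full 1024. A hedged sketch reusing the get_meminfo sketch above (the real hugepages.sh accumulates per-node surplus into nodes_test and compares sorted arrays; resv is presumably taken from HugePages_Rsvd):

    nr_hugepages=1024                            # what default_setup requested
    total=$(get_meminfo HugePages_Total)         # 1024 in the run above
    surp=$(get_meminfo HugePages_Surp)           # 0
    resv=$(get_meminfo HugePages_Rsvd)           # 0
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool mismatch' >&2
    # per-node share: node0 should carry the whole pool on a single-node guest
    echo "node0=$(get_meminfo HugePages_Total 0) expecting $nr_hugepages"

The per_node_1G_alloc test that run_test launches next requests 1048576 kB pinned to node 0: at the 2048 kB default page size that is the 512 pages visible in the trace that follows.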
setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.457 04:53:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.457 04:53:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:12.457 ************************************ 00:04:12.457 START TEST per_node_1G_alloc 00:04:12.457 ************************************ 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.457 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.992 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.993 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.993 0000:00:12.0 (1b36 0010): 
Already using the uio_pci_generic driver 00:04:12.993 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.993 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8961172 kB' 'MemAvailable: 10546028 kB' 'Buffers: 2436 kB' 'Cached: 1798296 kB' 'SwapCached: 0 kB' 'Active: 463596 kB' 'Inactive: 1458276 kB' 'Active(anon): 131612 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122748 kB' 'Mapped: 48640 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135792 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72724 kB' 'KernelStack: 6452 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace trimmed: the setup/common.sh@31-32 read/continue cycle skips every /proc/meminfo field until AnonHugePages matches]
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
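The anon=0 just recorded pairs with the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] guard: that pattern is the content of /sys/kernel/mm/transparent_hugepage/enabled, and since THP is not locked to [never] on this guest, verify_nr_hugepages samples AnonHugePages as a baseline so THP-backed memory is not confused with pool pages later. Roughly (a sketch, not the verbatim hugepages.sh code):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        # THP can grow anonymous huge mappings behind the test's back,
        # so remember the starting value (0 kB in this run)
        anon=$(get_meminfo AnonHugePages)
    fi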
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.994 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.995 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8961172 kB' 'MemAvailable: 10546028 kB' 'Buffers: 2436 kB' 'Cached: 1798296 kB' 'SwapCached: 0 kB' 'Active: 462372 kB' 'Inactive: 1458276 kB' 'Active(anon): 130388 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121740 kB' 'Mapped: 48636 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135808 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72740 kB' 'KernelStack: 6416 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:04:12.995 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [... every field from MemTotal through HugePages_Rsvd compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; no match, so each iteration hits continue and the next IFS=': ' / read -r var val _ pair; identical repeats elided ...]
00:04:12.996 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.996 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.996 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
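
The [[ -e /sys/devices/system/node/node/meminfo ]] test in each lookup (with $node empty, hence the odd node/node path) is the per-node branch of the same helper: given a node number it would read /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix that the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips. A hedged sketch of that branch, again reconstructed from the trace (get_node_meminfo is a hypothetical name; the real helper takes the node as an optional second argument):

    shopt -s extglob   # the +([0-9]) pattern below needs extglob
    # Sketch (assumption): per-node variant of the lookup traced above.
    get_node_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        [[ -e $mem_f ]] || return 1
        mapfile -t mem <"$mem_f"
        # "Node 0 MemFree: ..." -> "MemFree: ...", as in the traced mem=() line
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

That printf '%s\n' "${mem[@]}" re-emission of the captured array is what produces the long single-line meminfo snapshots in this trace.
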
00:04:12.996 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:12.997 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:12.997 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:12.997 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:12.997 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.997 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.997 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.997 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.997 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.997 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.997 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.997 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.997 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.997 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8961424 kB' 'MemAvailable: 10546280 kB' 'Buffers: 2436 kB' 'Cached: 1798296 kB' 'SwapCached: 0 kB' 'Active: 462488 kB' 'Inactive: 1458276 kB' 'Active(anon): 130504 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121628 kB' 'Mapped: 48636 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135808 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72740 kB' 'KernelStack: 6432 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:04:12.997 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [... every field from MemTotal through HugePages_Surp compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d; no match, so each iteration hits continue and the next IFS=': ' / read -r var val _ pair; identical repeats elided ...]
00:04:12.999 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:12.999 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.999 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
nr_hugepages=512
resv_hugepages=0
surplus_hugepages=0
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
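
With anon, surp and resv all read back as 0, setup/hugepages.sh can check the configured count against the kernel's view: the @107/@109 arithmetic tests below assert that the 512 requested pages equal the total once surplus and reserved pages are accounted for. A compact sketch of that accounting, assuming the get_meminfo sketch earlier (verify_hugepages is a hypothetical name, not the script's):

    # Sketch (assumption): consistency check mirrored from the
    # hugepages.sh trace: requested count vs /proc/meminfo counters.
    verify_hugepages() {
        local want=$1 anon surp resv total
        anon=$(get_meminfo AnonHugePages)
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        total=$(get_meminfo HugePages_Total)
        echo "nr_hugepages=$total"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        # Both (( )) tests must hold, as in the traced @107 and @109 lines.
        (( want == total + surp + resv )) && (( want == total ))
    }

    verify_hugepages 512   # passes on this box: 512 == 512 + 0 + 0

On this run the check succeeds, so the test proceeds to the per-node HugePages_Total lookup traced below.
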
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:12.999 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8961944 kB' 'MemAvailable: 10546800 kB' 'Buffers: 2436 kB' 'Cached: 1798296 kB' 'SwapCached: 0 kB' 'Active: 462464 kB' 'Inactive: 1458276 kB' 'Active(anon): 130480 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121820 kB' 'Mapped: 48636 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135808 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72740 kB' 'KernelStack: 6432 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:04:12.999 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.000 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [... fields MemTotal through CommitLimit compared against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; no match, so each iteration hits continue and the next IFS=': ' / read -r var val _ pair; identical repeats elided, the scan continues below ...]
00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=':
' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.261 04:53:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8962088 kB' 'MemUsed: 3279888 kB' 'SwapCached: 0 kB' 'Active: 462364 kB' 'Inactive: 1458276 kB' 'Active(anon): 130380 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1800732 kB' 'Mapped: 48636 kB' 'AnonPages: 121748 kB' 'Shmem: 10472 kB' 'KernelStack: 6416 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63068 kB' 'Slab: 135808 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.261 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.261 04:53:27 
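For readers following the trace: the get_meminfo helper in setup/common.sh resolves a single meminfo field, preferring the per-NUMA-node sysfs file when a node argument is given. A minimal sketch of that loop, reconstructed from the @17-@33 xtrace tags above; this is an approximation for orientation, not the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    get_meminfo() { # usage: get_meminfo <field> [<numa node>]
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem

        # Prefer the per-node view when a node is given and sysfs exposes one
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")

        # "HugePages_Total:    512" splits into var=HugePages_Total val=512
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total 0   # prints 512 on this node, per the trace

The field-by-field [[ ... == ... ]] / continue spam in the log is exactly this loop under set -x: one comparison per meminfo line until the requested key matches.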
[trace condensed, 00:04:13.261-00:04:13.263: setup/common.sh@31-32 read loop scans the node0 meminfo keys (MemTotal … HugePages_Free); nothing matches HugePages_Surp until the final key]
00:04:13.263 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.263 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.263 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:13.263 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:13.263 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:13.263 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:13.263 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:13.263 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:13.263 node0=512 expecting 512
00:04:13.263 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:13.263
00:04:13.263 real 0m0.713s
00:04:13.263 user 0m0.329s
00:04:13.263 sys 0m0.394s
00:04:13.263 ************************************
00:04:13.263 END TEST per_node_1G_alloc
00:04:13.263 ************************************
00:04:13.263 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:13.263 04:53:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:13.263 04:53:27 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:13.263 04:53:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:13.263 04:53:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:13.263 04:53:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:13.263 ************************************
00:04:13.263 START TEST even_2G_alloc
00:04:13.263 ************************************
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
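The get_test_nr_hugepages call above turns the requested test size into a page count. A hedged recap of the arithmetic as the trace implies it (the kB units are inferred from the 'Hugepagesize: 2048 kB' snapshot field and the resulting nr_hugepages=1024; not the verbatim SPDK source):

    # 2 GiB expressed in kB, divided by a 2048 kB (2 MiB) default huge page
    default_hugepages=2048                      # kB per huge page
    size=2097152                                # kB requested by even_2G_alloc
    nr_hugepages=$((size / default_hugepages))  # 2097152 / 2048 = 1024

    # With no user-pinned nodes and a single NUMA node, every page lands on
    # node 0, mirroring hugepages.sh@82 (nodes_test[_no_nodes - 1]=1024):
    declare -a nodes_test
    nodes_test[0]=$nr_hugepages

NRHUGE=1024 together with HUGE_EVEN_ALLOC=yes, set a few entries below, hands that target to scripts/setup.sh, which spreads the pages evenly across the available nodes.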
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:13.263 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:13.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:13.787 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:13.787 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:13.787 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:13.787 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.787 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7916576 kB' 'MemAvailable: 9501432 kB' 'Buffers: 2436 kB' 'Cached: 1798296 kB' 'SwapCached: 0 kB' 'Active: 463300 kB' 'Inactive: 1458276 kB' 'Active(anon): 131316 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 122156 kB' 'Mapped: 48688 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135736 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72668 kB' 'KernelStack: 6456 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
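The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 above is the expanded form of verify_nr_hugepages' transparent-hugepage gate: the kernel reports "[madvise]" selected, so THP is not fully off and AnonHugePages is sampled. A hedged sketch of that logic, reusing the get_meminfo sketch shown earlier (reconstructed from the trace, not verbatim source):

    # THP state string, e.g. "always [madvise] never" as in the trace above
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP can inflate anonymous-memory numbers, so record AnonHugePages
        anon=$(get_meminfo AnonHugePages)   # kB; 0 in this run
    else
        anon=0
    fi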
[trace condensed, 00:04:13.787-00:04:13.789: setup/common.sh@31-32 read loop walks /proc/meminfo (MemTotal … HardwareCorrupted) looking for AnonHugePages; every non-matching key is skipped with continue]
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.789 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.790 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7916828 kB' 'MemAvailable: 9501684 kB' 'Buffers: 2436 kB' 'Cached: 1798296 kB' 'SwapCached: 0 kB' 'Active: 462740 kB' 'Inactive: 1458276 kB' 'Active(anon): 130756 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 122156 kB' 'Mapped: 48808 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135732 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72664 kB' 'KernelStack: 6360 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
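With anon, surplus and reserved counts in hand, the remainder of verify_nr_hugepages is bookkeeping: the global HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages (the @110 assertion seen earlier, 512 == nr_hugepages + surp + resv in the per-node test), and each node's sysfs count must match the expected per-node split. A hedged sketch under those assumptions, again reusing the get_meminfo sketch above (extglob enabled as there; nodes_test holds the expected split):

    nr_hugepages=1024                         # target from get_test_nr_hugepages
    nodes_test=(1024)                         # expected split, one NUMA node here
    surp=$(get_meminfo HugePages_Surp)        # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)        # 0 in this run
    total=$(get_meminfo HugePages_Total)      # 1024 per the snapshot above
    (( total == nr_hugepages + surp + resv )) || exit 1

    # Per-node check, as in "node0=512 expecting 512" from the previous test
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        echo "node$n=$(get_meminfo HugePages_Total "$n") expecting ${nodes_test[n]}"
    done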
[... setup/common.sh@31-32: the read/compare loop walks every field of the snapshot above, continuing until HugePages_Surp matches ...]
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
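The lookup that just returned surp=0 is the generic field scan in setup/common.sh: snapshot the meminfo source, then read it back with IFS=': ' until the requested key matches and echo its value (the \H\u\g\e... strings in the trace are bash xtrace's escaping of the literal comparison pattern). A minimal sketch of that pattern, reconstructed from the trace above and simplified (the logged helper buffers the file with mapfile and strips per-node prefixes first):

    get_meminfo() {
        local get=$1 node=${2:-}   # key to look up, optional NUMA node
        local var val _
        local mem_f=/proc/meminfo
        # With a node argument, prefer the per-node file when it exists;
        # node is empty in this run, so the trace fell back to /proc/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done <"$mem_f"
        return 1
    }

Against the snapshot above, get_meminfo HugePages_Surp prints 0, matching the surp=0 assignment in the trace.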
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.792 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7917140 kB' 'MemAvailable: 9501996 kB' 'Buffers: 2436 kB' 'Cached: 1798296 kB' 'SwapCached: 0 kB' 'Active: 462908 kB' 'Inactive: 1458276 kB' 'Active(anon): 130924 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 121880 kB' 'Mapped: 48808 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135724 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72656 kB' 'KernelStack: 6460 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
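Each snapshot like the one just printed is captured with mapfile -t mem and then normalized by the common.sh@29 expansion, which strips a leading 'Node N ' prefix so per-node meminfo files parse the same way as /proc/meminfo. A standalone reproduction of that strip, using hypothetical per-node input lines:

    shopt -s extglob   # the +([0-9]) pattern below needs extended globbing
    # hypothetical lines in the shape of /sys/devices/system/node/node0/meminfo
    mapfile -t mem < <(printf '%s\n' \
        'Node 0 HugePages_Total:  1024' \
        'Node 0 HugePages_Free:   1024')
    mem=("${mem[@]#Node +([0-9]) }")   # same expansion as common.sh@29
    printf '%s\n' "${mem[@]}"          # -> 'HugePages_Total:  1024' etc.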
[... setup/common.sh@31-32: the read/compare loop walks every field of the snapshot above, continuing until HugePages_Rsvd matches ...]
00:04:13.795 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:13.795 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.795 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:13.795 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:13.795 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:13.795 nr_hugepages=1024
00:04:13.795 resv_hugepages=0
00:04:13.795 surplus_hugepages=0
00:04:13.795 anon_hugepages=0
00:04:13.795 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:13.795 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:13.795 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:13.796 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:13.796 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
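The two arithmetic guards just traced are the point of the check: the requested page count (1024 for this even 2 GiB allocation) is compared against nr_hugepages plus the surplus and reserved counts, and then against nr_hugepages alone. Restated standalone with the values echoed above:

    # values read back from /proc/meminfo by the get_meminfo calls above
    nr_hugepages=1024 surp=0 resv=0 anon=0
    (( 1024 == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2   # hugepages.sh@107
    (( 1024 == nr_hugepages ))               || echo 'unexpected nr_hugepages' >&2        # hugepages.sh@109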
00:04:13.796 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:13.796 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:13.796 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:13.796 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:13.796 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.796 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.796 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.796 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.796 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.796 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.796 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.796 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7917020 kB' 'MemAvailable: 9501876 kB' 'Buffers: 2436 kB' 'Cached: 1798296 kB' 'SwapCached: 0 kB' 'Active: 462412 kB' 'Inactive: 1458276 kB' 'Active(anon): 130428 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 121844 kB' 'Mapped: 48636 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135724 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72656 kB' 'KernelStack: 6432 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32: the read/compare loop walks every field of the snapshot above, continuing until HugePages_Total matches ...]
04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.798 04:53:28 
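The trace above shows setup/common.sh's get_meminfo walking a meminfo snapshot one field at a time: snapshot the file with mapfile, strip any per-node prefix, then read each line with IFS=': ' until the requested key matches, and echo its value. A minimal standalone sketch of that pattern, assuming bash with extglob; names mirror the trace, but this is illustrative, not the upstream implementation:

  #!/usr/bin/env bash
  shopt -s extglob                 # needed for the +([0-9]) pattern below
  # get_meminfo KEY [NODE] -> print KEY's value from /proc/meminfo, or from
  # /sys/devices/system/node/node$NODE/meminfo when a node is given.
  get_meminfo() {
      local get=$1 node=$2 var val _ line
      local -a mem
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")           # drop per-node "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line" # split "Key: value kB"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  get_meminfo HugePages_Total      # e.g. 1024, as echoed in the trace above
  get_meminfo HugePages_Surp 0     # node-0 surplus, as queried above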
00:04:13.798 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7917020 kB' 'MemUsed: 4324956 kB' 'SwapCached: 0 kB' 'Active: 462484 kB' 'Inactive: 1458276 kB' 'Active(anon): 130500 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 1800732 kB' 'Mapped: 48636 kB' 'AnonPages: 121876 kB' 'Shmem: 10472 kB' 'KernelStack: 6416 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63068 kB' 'Slab: 135720 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 scan the node0 snapshot field by field (MemTotal through HugePages_Free), issuing "continue" for each key that is not HugePages_Surp]
00:04:13.799 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.799 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.799 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:13.799 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:13.799 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:13.799 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:13.799 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:13.799 node0=1024 expecting 1024
00:04:13.799 ************************************
00:04:13.799 END TEST even_2G_alloc
00:04:13.799 ************************************
00:04:13.799 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:13.799 04:53:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:13.799
00:04:13.799 real	0m0.701s
00:04:13.799 user	0m0.320s
00:04:13.799 sys	0m0.398s
00:04:13.799 04:53:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:13.799 04:53:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:14.059 04:53:28 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:14.059 04:53:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:14.059 04:53:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:14.059 04:53:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
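The even_2G_alloc check that just finished is the per-node accounting visible in the trace: hugepages.sh folds reserved and surplus pages into nodes_test[node] and compares the result with what /sys reports per node, hence the "node0=1024 expecting 1024" line. A condensed sketch of that accounting, reusing the get_meminfo sketch above (extglob still on; the 1024 expectation and zero resv/surp values are taken from this trace, and the sketch is illustrative only):

  declare -a nodes_sys nodes_test
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
  done
  nodes_test[0]=1024              # expectation the test set up beforehand
  resv=0                          # reserved pages; 0 in the trace above
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      surp=$(get_meminfo HugePages_Surp "$node")   # 0 in the trace above
      (( nodes_test[node] += surp ))
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done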
00:04:14.059 ************************************
00:04:14.059 START TEST odd_alloc
00:04:14.059 ************************************
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:14.059 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:14.317 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:14.581 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:14.581 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:14.581 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:14.581 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.581 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7922148 kB' 'MemAvailable: 9507008 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 462812 kB' 'Inactive: 1458280 kB' 'Active(anon): 130828 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121940 kB' 'Mapped: 49008 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135740 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72672 kB' 'KernelStack: 6440 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: setup/common.sh@31-32 scan the snapshot field by field (MemTotal through HardwareCorrupted), issuing "continue" for each key that is not AnonHugePages]
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.582 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7921900 kB' 'MemAvailable: 9506760 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 462824 kB' 'Inactive: 1458280 kB' 'Active(anon): 130840 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121948 kB' 'Mapped: 48636 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135752 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72684 kB' 'KernelStack: 6448 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.583 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 
04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local 
var val 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7921900 kB' 'MemAvailable: 9506760 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 462644 kB' 'Inactive: 1458280 kB' 'Active(anon): 130660 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121756 kB' 'Mapped: 48636 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135744 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72676 kB' 'KernelStack: 6416 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.584 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
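Two notes on reading the trace above. The backslash-escaped tokens like \H\u\g\e\P\a\g\e\s\_\S\u\r\p are bash's xtrace re-quoting the expanded right-hand side of [[ $var == "$get" ]], marking it as a literal string match rather than a glob pattern. The helper being traced follows the usual /proc/meminfo parsing idiom; the sketch below is a reconstruction for illustration (the name get_meminfo_sketch is made up here, and the body is inferred from the trace, not copied from setup/common.sh):

#!/usr/bin/env bash
shopt -s extglob

# Sketch: print the value of one /proc/meminfo field, optionally for a
# single NUMA node. Mirrors the idiom visible in the trace above.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo

    # Per-node counters live in sysfs; otherwise use the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local mem
    mapfile -t mem <"$mem_f"
    # Per-node lines carry a "Node <N> " prefix; strip it so every line
    # splits uniformly as "key: value".
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        # IFS=': ' splits on colons and spaces, like the traced read loop.
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Example (values differ per host; on this runner):
#   get_meminfo_sketch HugePages_Total     -> 1025
#   get_meminfo_sketch HugePages_Surp 0    -> surplus pages on node 0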
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
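The hugepages.sh@107 and @109 checks above are the core assertion of this odd_alloc pass: the counts read back from /proc/meminfo must satisfy HugePages_Total == nr_hugepages + surplus + reserved, which with the values above is 1025 == 1025 + 0 + 0. Rendered as a standalone snippet (variable names follow the trace; an illustration, not the verbatim setup/hugepages.sh):

#!/usr/bin/env bash
# Values as read back from /proc/meminfo in this run.
nr_hugepages=1025   # the odd page count the test requested
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
hp_total=1025       # HugePages_Total

# The kernel's total must account for the request plus any surplus and
# reserved pages, and in this test must equal the request exactly.
(( hp_total == nr_hugepages + surp + resv )) || exit 1
(( hp_total == nr_hugepages )) || exit 1
echo "hugepage accounting consistent: total=$hp_total"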
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.586 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7921900 kB' 'MemAvailable: 9506760 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 462424 kB' 'Inactive: 1458280 kB' 'Active(anon): 130440 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121856 kB' 'Mapped: 48636 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135744 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72676 kB' 'KernelStack: 6432 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the read loop checks every field from MemTotal through Unaccepted against HugePages_Total and continues]
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
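get_nodes above enumerates NUMA nodes from sysfs with an extglob pattern and seeds the expected per-node page count (a single node on this VM). A minimal sketch of that step, reconstructed from the trace rather than copied from setup/hugepages.sh:

#!/usr/bin/env bash
shopt -s extglob nullglob

# Enumerate NUMA nodes from sysfs and record how many hugepages this run
# expects on each (1025 here, one node on this runner).
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=1025
done

no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes visible in sysfs" >&2; exit 1; }
echo "found $no_nodes node(s): ${!nodes_sys[*]}"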
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7922188 kB' 'MemUsed: 4319788 kB' 'SwapCached: 0 kB' 'Active: 462396 kB' 'Inactive: 1458280 kB' 'Active(anon): 130412 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 1800736 kB' 'Mapped: 48636 kB' 'AnonPages: 121776 kB' 'Shmem: 10472 kB' 'KernelStack: 6416 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63068 kB' 'Slab: 135744 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.588 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the read loop walks the node0 fields (MemTotal, MemFree, MemUsed, SwapCached, Active, ...) checking each against HugePages_Surp]
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.589 04:53:29
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]]
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:14.589 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.590 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.590 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.590 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:14.590 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.590 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.590 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.590 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.590 04:53:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
node0=1025 expecting 1025
************************************
00:04:14.590 END TEST odd_alloc
00:04:14.590 ************************************
04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:14.590 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:14.590 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:14.590 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:14.590 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:14.590 04:53:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:14.590
00:04:14.590 real 0m0.713s user 0m0.341s sys 0m0.389s
04:53:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:14.590 04:53:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:14.849 04:53:29 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:14.849 04:53:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:14.849 04:53:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:14.849 04:53:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:14.849 ************************************
00:04:14.849 START TEST custom_alloc
00:04:14.849 ************************************
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:14.849 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:15.109 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:15.109 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:15.109 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:15.109 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:15.109 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:15.373
04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8979244 kB' 'MemAvailable: 10564104 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 462992 kB' 'Inactive: 1458280 kB' 'Active(anon): 131008 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122384 kB' 'Mapped: 48772 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135784 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72716 kB' 'KernelStack: 6448 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.373 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.374 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8979244 kB' 'MemAvailable: 10564104 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 462428 kB' 'Inactive: 1458280 kB' 'Active(anon): 130444 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121596 kB' 'Mapped: 48644 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135776 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72708 kB' 'KernelStack: 6432 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 
0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.375 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
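
[Editor's note] For context on the figure being verified in these scans: custom_alloc asked get_test_nr_hugepages for 1048576 kB, and with the 2048 kB default hugepage size reported in the dumps that works out to nr_hugepages=512, all pinned to node 0 via HUGENODE='nodes_hp[0]=512'. A minimal sketch of that sizing step under those assumptions (helper and variable names come from the trace; the body is a reconstruction, not the SPDK source):

    get_test_nr_hugepages() { # usage: get_test_nr_hugepages <size-in-kB> [node...]
        local size=$1
        local default_hugepages=2048 # kB, per 'Hugepagesize: 2048 kB' in the dumps
        if (( size >= default_hugepages )); then
            # nr_hugepages is a global consumed by the caller: 1048576 / 2048 = 512
            nr_hugepages=$((size / default_hugepages))
        fi
    }

The meminfo dumps agree with this arithmetic: HugePages_Total: 512 alongside Hugetlb: 1048576 kB, i.e. the whole 1 GiB request landed as 2 MiB pages.
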
00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.376 04:53:29 
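An aside on the trace notation above: the backslash-laden right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not log corruption. When bash's xtrace prints a [[ ... == "pattern" ]] test, it re-quotes the quoted operand by escaping every character so the echoed form stays a literal match. A minimal sketch reproducing the effect (hypothetical snippet, not part of the SPDK scripts):

  set -x                        # enable xtrace, as autotest does
  var=HugePages_Total
  [[ $var == "HugePages_Surp" ]] || echo "skip $var"
  # xtrace prints: [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]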
00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-31 -- # [get_meminfo prologue elided: local get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile -t mem, "Node N" prefix strip, IFS=': ' read loop]
00:04:15.376 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8979244 kB' 'MemAvailable: 10564104 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 462468 kB' 'Inactive: 1458280 kB' 'Active(anon): 130484 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121888 kB' 'Mapped: 48644 kB' 'Shmem: 10472 kB' 'KReclaimable: 63068 kB' 'Slab: 135776 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72708 kB' 'KernelStack: 6448 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:04:15.377 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [50 near-identical trace iterations elided: each key from MemTotal through HugePages_Free fails the HugePages_Rsvd comparison and hits "continue"]
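The machinery being traced here is setup/common.sh's get_meminfo helper: pick /proc/meminfo (or the per-node sysfs file when a node id is passed), strip any "Node N" column, then walk key/value pairs until the requested key matches and echo its value. A condensed sketch reconstructed from the trace above, not the verbatim SPDK source:

  get_meminfo() { # get_meminfo <Key> [node]
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem line
    # Per-node statistics live under sysfs when a node id is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node files prefix every line with "Node <id> "; strip it (needs extglob).
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<<"$line"
      [[ $var == "$get" ]] || continue   # non-matching keys: the "continue" spam above
      echo "$val"
      return 0
    done
    return 1
  }

With that in hand, each block above compresses to a single fact: the scan ends by echoing the matched value, 0 in both cases here.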
00:04:15.379 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.379 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.379 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:15.379 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:15.379 nr_hugepages=512
00:04:15.379 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:15.379 resv_hugepages=0
00:04:15.379 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:15.379 surplus_hugepages=0
00:04:15.379 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:15.379 anon_hugepages=0
00:04:15.379 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:15.379 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:15.379 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:15.379 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:15.379 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-31 -- # [get_meminfo prologue elided: local get=HugePages_Total, node=, mem_f=/proc/meminfo, mapfile -t mem, "Node N" prefix strip, IFS=': ' read loop]
00:04:15.379 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' [second /proc/meminfo snapshot; identical to the one above except: 'Active: 462436 kB' 'Active(anon): 130452 kB' 'AnonPages: 121856 kB' 'KernelStack: 6432 kB' 'PageTables: 4172 kB']
00:04:15.379 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [48 near-identical trace iterations elided: each key from MemTotal through Unaccepted fails the HugePages_Total comparison and hits "continue"]
00:04:15.380 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.380 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:04:15.380 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:15.380 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:15.381 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:15.381 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:15.381 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:15.381 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:15.381 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:15.381 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:15.381 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:15.381 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:15.381 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:15.381 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-31 -- # [get_meminfo prologue elided: local get=HugePages_Surp, node=0, mem_f switches to /sys/devices/system/node/node0/meminfo, mapfile -t mem, "Node 0" prefix strip, IFS=': ' read loop]
00:04:15.381 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8979244 kB' 'MemUsed: 3262732 kB' 'SwapCached: 0 kB' 'Active: 462688 kB' 'Inactive: 1458280 kB' 'Active(anon): 130704 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1800736 kB' 'Mapped: 48644 kB' 'AnonPages: 121860 kB' 'Shmem: 10472 kB' 'KernelStack: 6432 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63068 kB' 'Slab: 135776 kB' 'SReclaimable: 63068 kB' 'SUnreclaim: 72708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:15.381 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [node0 key scan in progress: MemTotal, MemFree, MemUsed, ... each fail the HugePages_Surp comparison and hit "continue"; this log excerpt is truncated mid-scan at the Unaccepted comparison]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.382 node0=512 expecting 512 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:15.382 00:04:15.382 real 0m0.692s 00:04:15.382 user 0m0.332s 00:04:15.382 sys 0m0.407s 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.382 04:53:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:15.382 ************************************ 00:04:15.382 END TEST custom_alloc 00:04:15.382 ************************************ 00:04:15.382 04:53:29 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:15.382 04:53:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.382 04:53:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.382 04:53:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.382 ************************************ 00:04:15.382 START TEST no_shrink_alloc 00:04:15.382 ************************************ 00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:15.382 04:53:29 
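The `node0=512 expecting 512` exchange above is the heart of the custom_alloc check: the test adds any surplus pages to what it requested per node and compares against what the kernel reports. A minimal sketch of that loop, reconstructed from the hugepages.sh@117-@130 trace lines (the nodes_sys bookkeeping and the awk stand-in for get_meminfo are assumptions, not quotes from the script):

  #!/usr/bin/env bash
  # Stand-in for setup/common.sh's helper (a fuller sketch appears later in this log);
  # the per-node argument is accepted but ignored here.
  get_meminfo() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0; exit }' /proc/meminfo; }

  declare -A nodes_test=( [0]=512 ) nodes_sys=( [0]=512 )  # requested vs. kernel-reported pages
  declare -A sorted_t sorted_s

  for node in "${!nodes_test[@]}"; do
      surp=$(get_meminfo HugePages_Surp "$node")   # returned 0 in the trace above
      (( nodes_test[node] += surp ))
      sorted_t[${nodes_test[node]}]=1              # bucket of distinct expected counts
      sorted_s[${nodes_sys[node]}]=1               # bucket of distinct observed counts
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
  done
  [[ ${nodes_sys[0]} == "${nodes_test[0]}" ]]      # the [[ 512 == 512 ]] seen in the trace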
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:15.382 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:15.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:15.954 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:15.954 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:15.954 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:15.954 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
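The jump from `get_test_nr_hugepages 2097152 0` to `nr_hugepages=1024` is plain division by the huge page size: treating the argument as kB, 2097152 / 2048 = 1024, which also matches the `Hugetlb: 2097152 kB` totals in the snapshots below. A sketch of that calculation (the function body is inferred from the @49-@73 trace lines, not copied from setup/hugepages.sh):

  #!/usr/bin/env bash
  get_test_nr_hugepages() {          # sketch; real logic lives in setup/hugepages.sh@49-@73
      local size=$1; shift
      local node_ids=("$@")          # here: ('0')
      local default_hugepages=2048   # Hugepagesize in kB, per the meminfo snapshots below
      (( size >= default_hugepages )) || return 1
      nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
      declare -Ag nodes_test=()
      local node
      for node in "${node_ids[@]}"; do
          nodes_test[$node]=$nr_hugepages            # every requested page pinned to that node
      done
  }
  get_test_nr_hugepages 2097152 0    # matches the call in the trace above
  echo "$nr_hugepages"               # -> 1024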
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.954 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935616 kB' 'MemAvailable: 9520468 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 459624 kB' 'Inactive: 1458280 kB' 'Active(anon): 127640 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118980 kB' 'Mapped: 47928 kB' 'Shmem: 10472 kB' 'KReclaimable: 63056 kB' 'Slab: 135644 kB' 'SReclaimable: 63056 kB' 'SUnreclaim: 72588 kB' 'KernelStack: 6324 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace repeats the match/continue/IFS/read pattern for each meminfo field, from MemTotal through HardwareCorrupted, until AnonHugePages matches]
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
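Nearly all of the bulk in this trace is `get_meminfo` scanning a meminfo file one line at a time with `IFS=': ' read -r var val _` and skipping every field that is not the one requested, which is why each lookup produces one match/continue pair per field. A reconstruction from the @16-@33 trace lines above; treat it as a sketch of setup/common.sh, not the verbatim source:

  #!/usr/bin/env bash
  shopt -s extglob                   # needed for the "Node +([0-9]) " prefix strip below
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f mem
      mem_f=/proc/meminfo
      # with a node argument, read the per-node file instead (empty node fails this test,
      # exactly as the [[ -e /sys/devices/system/node/node/meminfo ]] line above shows)
      [[ -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")           # drop the "Node N " prefix of per-node entries
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue       # the match/continue pairs that fill this log
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")        # the big printf of the whole snapshot above
      return 1
  }
  get_meminfo AnonHugePages   # -> 0, exactly as echoed in the trace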
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.955 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.956 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935824 kB' 'MemAvailable: 9520676 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 459072 kB' 'Inactive: 1458280 kB' 'Active(anon): 127088 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118468 kB' 'Mapped: 47896 kB' 'Shmem: 10472 kB' 'KReclaimable: 63056 kB' 'Slab: 135592 kB' 'SReclaimable: 63056 kB' 'SUnreclaim: 72536 kB' 'KernelStack: 6336 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:04:15.956 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace repeats the match/continue/IFS/read pattern for each meminfo field, from MemTotal through HugePages_Rsvd, until HugePages_Surp matches]
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.957 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935824 kB' 'MemAvailable: 9520676 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 459036 kB' 'Inactive: 1458280 kB' 'Active(anon): 127052 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118380 kB' 'Mapped: 47896 kB' 'Shmem: 10472 kB' 'KReclaimable: 63056 kB' 'Slab: 135592 kB' 'SReclaimable: 63056 kB' 'SUnreclaim: 72536 kB' 'KernelStack: 6304 kB' 'PageTables: 3608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace again walks the meminfo fields one by one against HugePages_Rsvd; this excerpt ends mid-scan, just after the SwapFree comparison]
04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.958 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.959 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:15.960 nr_hugepages=1024 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:15.960 resv_hugepages=0 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.960 surplus_hugepages=0 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.960 anon_hugepages=0 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935824 kB' 'MemAvailable: 9520676 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 459132 kB' 'Inactive: 1458280 kB' 'Active(anon): 127148 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118252 kB' 'Mapped: 47896 kB' 'Shmem: 10472 kB' 'KReclaimable: 63056 kB' 'Slab: 135592 kB' 'SReclaimable: 63056 kB' 'SUnreclaim: 72536 kB' 'KernelStack: 6336 kB' 'PageTables: 3696 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
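The wall of IFS/read/continue lines above is all one small helper at work. As a minimal sketch reconstructed from the xtrace alone, not the verbatim setup/common.sh, get_meminfo behaves roughly like this:

# Minimal sketch of get_meminfo as reconstructed from the xtrace above --
# not the verbatim setup/common.sh. Prints the value of one meminfo field,
# optionally for a single NUMA node.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # With a node argument the per-node view is used instead; with node
    # empty this test fails (node/meminfo does not exist), exactly as the
    # @23 line in the trace shows.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id> "; strip it
    # (the mem=("${mem[@]#Node +([0-9]) }") line in the trace).
    mem=("${mem[@]#Node +([0-9]) }")

    local line IFS=': '
    for line in "${mem[@]}"; do
        # Each iteration is one read/[[ ]]/continue quartet in the trace.
        read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val" # "echo 0" for HugePages_Rsvd, "echo 1024" for HugePages_Total
        return 0
    done
    return 1
}

So get_meminfo HugePages_Rsvd answers 0 here, and the xtrace noise is simply every skipped field being logged on the way to the match.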
00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.960 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935824 kB' 'MemAvailable: 9520676 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 459132 kB' 'Inactive: 1458280 kB' 'Active(anon): 127148 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118252 kB' 'Mapped: 47896 kB' 'Shmem: 10472 kB' 'KReclaimable: 63056 kB' 'Slab: 135592 kB' 'SReclaimable: 63056 kB' 'SUnreclaim: 72536 kB' 'KernelStack: 6336 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the same per-field scan of /proc/meminfo, this time until HugePages_Total matches]
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
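Taken together, the three scalar queries feed one accounting identity, checked at setup/hugepages.sh@107-@110 above. A paraphrased sketch with this run's values, reusing the get_meminfo sketch from earlier (the variable names are the trace's own):

# Paraphrased sketch of the accounting check traced at setup/hugepages.sh
# @107-@110, plugged with the values visible in this run (surp=resv=0).
nr_hugepages=1024                    # pool size this test expects
surp=$(get_meminfo HugePages_Surp)   # 0 in this log
resv=$(get_meminfo HugePages_Rsvd)   # 0 in this log
total=$(get_meminfo HugePages_Total) # 1024 in this log

# The kernel-reported total must equal the expected pages plus any
# surplus and reserved pages; 1024 == 1024 + 0 + 0 holds here.
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2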
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:16.222 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:16.223 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.223 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.223 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.223 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.223 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935824 kB' 'MemUsed: 4306152 kB' 'SwapCached: 0 kB' 'Active: 459132 kB' 'Inactive: 1458280 kB' 'Active(anon): 127148 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1800736 kB' 'Mapped: 47896 kB' 'AnonPages: 118512 kB' 'Shmem: 10472 kB' 'KernelStack: 6336 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63056 kB' 'Slab: 135592 kB' 'SReclaimable: 63056 kB' 'SUnreclaim: 72536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: the same per-field scan, here over node0's meminfo fields, until HugePages_Surp matches]
00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.224 node0=1024 expecting 1024 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.224 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:16.482 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.747 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:16.747 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:16.747 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:16.747 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:16.747 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != 
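The INFO line above is the no-shrink policy that gives this test its name: with CLEAR_HUGE=no and NRHUGE=512, setup.sh leaves the existing 1024-page pool alone rather than shrinking it to the smaller request. A minimal standalone sketch of that policy (not the SPDK script itself; it assumes the standard sysfs layout, a 2 MiB default hugepage size, and root privileges to write):

NRHUGE=512
node_path=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
current=$(<"$node_path/nr_hugepages")
if (( current >= NRHUGE )); then
    # Never shrink: an existing larger pool already satisfies the request.
    echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
else
    echo "$NRHUGE" > "$node_path/nr_hugepages"
fi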
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.747 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7934264 kB' 'MemAvailable: 9519116 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 459636 kB' 'Inactive: 1458280 kB' 'Active(anon): 127652 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118748 kB' 'Mapped: 47856 kB' 'Shmem: 10472 kB' 'KReclaimable: 63056 kB' 'Slab: 135544 kB' 'SReclaimable: 63056 kB' 'SUnreclaim: 72488 kB' 'KernelStack: 6356 kB' 'PageTables: 3524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:04:16.748 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [~40 xtrace iterations condensed: each field of the snapshot above, MemTotal through HardwareCorrupted, fails the match against AnonHugePages and hits continue]
00:04:16.749 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:16.749 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:16.749 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:16.749 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
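Each of these scans is the same setup/common.sh get_meminfo pattern: split a meminfo line on ': ', skip fields until the requested name matches, then emit its value. A self-contained sketch of that pattern (reading /proc/meminfo directly rather than through the script's mem array):

get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching fields hit `continue` -- exactly what fills the trace above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done </proc/meminfo
    return 1
}
get_meminfo_sketch HugePages_Surp   # prints 0 on this box, matching the trace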
00:04:16.749 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:16.749 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:16.749 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18-29 -- # [same preamble as above: local node=, var val, mem_f=/proc/meminfo, mapfile -t mem]
00:04:16.749 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7934516 kB' 'MemAvailable: 9519368 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 459416 kB' 'Inactive: 1458280 kB' 'Active(anon): 127432 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118788 kB' 'Mapped: 47932 kB' 'Shmem: 10472 kB' 'KReclaimable: 63056 kB' 'Slab: 135552 kB' 'SReclaimable: 63056 kB' 'SUnreclaim: 72496 kB' 'KernelStack: 6304 kB' 'PageTables: 3572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:04:16.750 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [~50 xtrace iterations condensed: every field from MemTotal through HugePages_Rsvd fails the match against HugePages_Surp and hits continue]
00:04:16.751 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:16.751 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:16.751 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:16.751 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
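verify_nr_hugepages pulls its adjustment terms one scan at a time: AnonHugePages (transparent hugepages backing anonymous memory), HugePages_Surp (surplus pages allocated above the static pool via overcommit), and next HugePages_Rsvd (pages promised to mappings but not yet faulted in). All four global hugepage counters can also be grabbed in a single pass; a quick sketch, assuming awk is available:

awk '/^HugePages_/ { gsub(":", "", $1); print $1 "=" $2 }' /proc/meminfo
# prints HugePages_Total=1024, HugePages_Free=1024, HugePages_Rsvd=0, HugePages_Surp=0 here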
00:04:16.751 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:16.751 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:16.751 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18-29 -- # [same preamble as above]
00:04:16.751 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7934264 kB' 'MemAvailable: 9519116 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 459124 kB' 'Inactive: 1458280 kB' 'Active(anon): 127140 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118512 kB' 'Mapped: 47896 kB' 'Shmem: 10472 kB' 'KReclaimable: 63056 kB' 'Slab: 135556 kB' 'SReclaimable: 63056 kB' 'SUnreclaim: 72500 kB' 'KernelStack: 6352 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace iterations condensed: the scan against HugePages_Rsvd has reached Bounce when the captured log breaks off mid-iteration]
setup/common.sh@31 -- # read -r var val _ 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.752 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.753 04:53:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:16.753 nr_hugepages=1024 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:16.753 resv_hugepages=0 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.753 surplus_hugepages=0 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.753 anon_hugepages=0 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7934264 kB' 'MemAvailable: 9519116 kB' 'Buffers: 2436 kB' 'Cached: 1798300 kB' 'SwapCached: 0 kB' 'Active: 459092 kB' 'Inactive: 1458280 kB' 'Active(anon): 127108 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118508 kB' 'Mapped: 47896 kB' 'Shmem: 10472 kB' 'KReclaimable: 63056 kB' 'Slab: 135556 kB' 'SReclaimable: 63056 kB' 'SUnreclaim: 72500 kB' 'KernelStack: 6352 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
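The trace above is setup/common.sh's get_meminfo walking the captured meminfo dump one 'Key: value' pair at a time: IFS=': ' splits each line into a key and a value, and every key other than the requested one (HugePages_Rsvd first, now HugePages_Total) falls through to continue. A minimal standalone sketch of that lookup pattern, assuming a direct read of /proc/meminfo (the function name is illustrative, not the repo's exact code):

    #!/usr/bin/env bash
    # Sketch: fetch one value from /proc/meminfo the way the traced loop does,
    # splitting each "Key: value kB" line on ':' plus surrounding spaces.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # skip every non-matching key
            echo "$val"
            return 0
        done </proc/meminfo
        return 1  # requested key not present
    }

    get_meminfo_value HugePages_Rsvd  # prints 0 on this run

The hugepages.sh checks that follow, (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), use values fetched this way to confirm that the pool the test configured is exactly the pool the kernel reports.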
00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:16.753 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the same compare/continue/IFS/read trace repeats for every remaining meminfo key until HugePages_Total matches ...]
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7934264 kB' 'MemUsed: 4307712 kB' 'SwapCached: 0 kB' 'Active: 459056 kB' 'Inactive: 1458280 kB' 'Active(anon): 127072 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1458280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1800736 kB' 'Mapped: 47896 kB' 'AnonPages: 118468 kB' 'Shmem: 10472 kB' 'KernelStack: 6336 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63056 kB' 'Slab: 135552 kB' 'SReclaimable: 63056 kB' 'SUnreclaim: 72496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
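This lookup is per-NUMA-node: get_meminfo was called with node=0, so it swapped mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and mem=("${mem[@]#Node +([0-9]) }") strips the 'Node 0 ' prefix carried by the per-node file so the same key/value parser runs unchanged. A sketch of that source selection and prefix strip, under the same extglob assumption the traced pattern needs (the wrapper name is illustrative):

    #!/usr/bin/env bash
    shopt -s extglob  # +([0-9]) below is an extended glob

    # Sketch: emit meminfo lines for one NUMA node when the per-node sysfs
    # file exists, else the global file, normalizing away the "Node <n> "
    # prefix so both sources parse identically.
    read_meminfo_lines() {
        local node=$1 mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        printf '%s\n' "${mem[@]#Node +([0-9]) }"
    }

    read_meminfo_lines 0  # includes the "HugePages_Surp: 0" queried next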
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:16.755 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the same compare/continue/IFS/read trace repeats for every remaining node0 meminfo key until HugePages_Surp matches ...]
00:04:16.756 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:16.756 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:16.756 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:16.756 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:16.756 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:16.756 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:16.756 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:16.756 node0=1024 expecting 1024
00:04:16.756 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:16.756 04:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:16.756
00:04:16.756 real 0m1.343s
00:04:16.756 user 0m0.660s
00:04:16.757 sys 0m0.775s
00:04:16.757 04:53:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:16.757 04:53:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:16.757 ************************************
00:04:16.757 END TEST no_shrink_alloc
00:04:16.757 ************************************
00:04:16.757 04:53:31 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:04:16.757 04:53:31 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:16.757 04:53:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:16.757 04:53:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:16.757 04:53:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:16.757 04:53:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:16.757 04:53:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:16.757 04:53:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:16.757 04:53:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:16.757 ************************************
00:04:16.757 END TEST hugepages
00:04:16.757 ************************************
00:04:16.757 real 0m5.969s
00:04:16.757 user 0m2.738s
00:04:16.757 sys 0m3.362s
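The clear_hp trace above tears the pools back down: for each node it loops over every hugepage-size directory under sysfs and echoes 0, then exports CLEAR_HUGE=yes for later stages. The redirection target of that echo 0 is not shown in the trace; a plausible reconstruction, assuming the standard per-size nr_hugepages counter is the target (the writes need root):

    #!/usr/bin/env bash
    # Sketch: release every per-node hugepage pool by zeroing each size's
    # counter, mirroring the clear_hp loop traced above.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 >"$hp/nr_hugepages"  # assumed target of the traced 'echo 0'
        done
    done
    export CLEAR_HUGE=yes  # tells later setup stages the pools were cleared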
00:04:17.016 04:53:31 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:17.016 04:53:31 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:17.016 04:53:31 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:17.016 04:53:31 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:17.016 ************************************
00:04:17.016 START TEST driver
00:04:17.016 ************************************
00:04:17.016 04:53:31 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:17.016 * Looking for test storage...
00:04:17.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:17.016 04:53:31 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:17.016 04:53:31 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:17.016 04:53:31 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:23.585 04:53:37 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:23.585 04:53:37 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:23.585 04:53:37 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:23.585 04:53:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:23.585 ************************************
00:04:23.585 START TEST guess_driver
00:04:23.585 ************************************
00:04:23.585 04:53:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:04:23.585 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:23.585 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:23.585 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:23.585 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 ))
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]]
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]]
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:23.586 Looking for driver=uio_pci_generic
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue
00:04:23.586 04:53:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:24.163 04:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:24.163 04:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:24.163 04:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the same @58 / @61 / @57 check repeats for three more "-> uio_pci_generic" lines of the setup.sh config output ...]
00:04:24.163 04:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:24.163 04:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:24.163 04:53:38 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:24.163 04:53:38 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:30.744
00:04:30.744 real 0m7.248s
00:04:30.744 user 0m0.798s
00:04:30.744 sys 0m1.527s
00:04:30.744 04:53:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:30.744 ************************************
00:04:30.744 04:53:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:30.744 END TEST guess_driver
00:04:30.744 ************************************
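guess_driver passes because pick_driver falls through exactly as traced: no IOMMU groups and no unsafe no-IOMMU flag rule out vfio, and modprobe --show-depends confirms uio_pci_generic is loadable, so every "-> uio_pci_generic" marker read back from setup.sh config matches. A condensed sketch of that decision (paraphrased from the trace, not the verbatim driver.sh; shopt -s nullglob is assumed so an empty /sys/kernel/iommu_groups yields a zero count):

  shopt -s nullglob
  pick_driver() {
      local iommu_groups=(/sys/kernel/iommu_groups/*)
      local unsafe_vfio
      unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
      if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
          echo vfio-pci
      elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
          echo uio_pci_generic
      else
          echo 'No valid driver found'
      fi
  }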
00:04:30.744
00:04:30.744 real 0m13.297s
00:04:30.744 user 0m1.135s
00:04:30.744 sys 0m2.322s
00:04:30.744 04:53:44 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:30.744 04:53:44 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:30.744 ************************************
00:04:30.744 END TEST driver
00:04:30.744 ************************************
00:04:30.744 04:53:44 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:30.744 04:53:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:30.744 04:53:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:30.744 04:53:44 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:30.744 ************************************
00:04:30.744 START TEST devices
00:04:30.744 ************************************
00:04:30.744 04:53:44 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:30.744 * Looking for test storage...
00:04:30.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:30.744 04:53:44 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:30.744 04:53:44 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:04:30.744 04:53:44 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:30.744 04:53:44 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:31.680 04:53:45 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:04:31.680 04:53:45 setup.sh.devices -- common/autotest_common.sh@1667 -- # zoned_devs=()
00:04:31.680 04:53:45 setup.sh.devices -- common/autotest_common.sh@1667 -- # local -gA zoned_devs
00:04:31.680 04:53:45 setup.sh.devices -- common/autotest_common.sh@1668 -- # local nvme bdf
00:04:31.680 04:53:45 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme*
00:04:31.680 04:53:45 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:04:31.680 04:53:45 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n1
00:04:31.680 04:53:45 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:31.680 04:53:45 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]]
[... the same is_block_zoned trace repeats for nvme1n1, nvme2n1, nvme2n2, nvme2n3, nvme3c3n1 and nvme3n1; every device reports "none", so zoned_devs stays empty ...]
00:04:31.681 04:53:45 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:04:31.681 04:53:45 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:04:31.681 04:53:45 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:31.681 04:53:45 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:31.681 04:53:45 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
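get_zoned_devs leaves zoned_devs empty because every namespace's queue reports "none". The check itself is just a sysfs read; a self-contained equivalent of what the trace shows:

  # A block device is zoned (ZNS/SMR) if its queue mode is not "none";
  # zoned devices are excluded from the generic mount tests below.
  is_block_zoned() {
      local device=$1
      [[ -e /sys/block/$device/queue/zoned ]] || return 1
      [[ $(< "/sys/block/$device/queue/zoned") != none ]]
  }

  is_block_zoned nvme0n1 || echo 'nvme0n1 is a regular (non-zoned) device'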
00:04:31.681 04:53:45 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:31.681 04:53:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:31.681 04:53:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:31.681 04:53:45 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0
00:04:31.681 04:53:45 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]]
00:04:31.681 04:53:45 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:31.681 04:53:45 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:04:31.681 04:53:45 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:04:31.681 No valid GPT data, bailing
00:04:31.681 04:53:46 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:31.681 04:53:46 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:31.681 04:53:46 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:31.681 04:53:46 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:31.681 04:53:46 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:31.681 04:53:46 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:31.681 04:53:46 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120
00:04:31.681 04:53:46 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size ))
00:04:31.681 04:53:46 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:31.681 04:53:46 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0
[... the same GPT probe and size check runs for the remaining namespaces: nvme1n1 (0000:00:10.0, 6343335936 bytes) and nvme2n1, nvme2n2, nvme2n3 (0000:00:12.0, 4294967296 bytes each) all pass the size check and are added; every spdk-gpt.py probe prints "No valid GPT data, bailing" ...]
00:04:31.941 04:53:46 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:31.941 04:53:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1
00:04:31.941 04:53:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3
00:04:31.941 04:53:46 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0
00:04:31.941 04:53:46 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]]
00:04:31.941 04:53:46 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1
00:04:31.941 04:53:46 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt
00:04:31.941 04:53:46 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1
00:04:31.941 No valid GPT data, bailing
00:04:31.941 04:53:46 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1
00:04:31.941 04:53:46 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:31.941 04:53:46 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:31.941 04:53:46 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1
00:04:31.941 04:53:46 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1
00:04:31.941 04:53:46 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]]
00:04:31.941 04:53:46 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824
00:04:31.941 04:53:46 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size ))
00:04:31.941 04:53:46 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 ))
00:04:31.941 04:53:46 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:04:31.941 04:53:46 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:31.941 04:53:46 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:31.941 04:53:46 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:31.941 04:53:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:31.941 ************************************
00:04:31.941 START TEST nvme_mount
00:04:31.941 ************************************
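Five of the six namespaces clear min_disk_size=3221225472 (3 GiB) and land in blocks; nvme3n1 at 1073741824 bytes (1 GiB) is filtered out, and nvme0n1 becomes the test disk. sec_size_to_bytes echoes a byte count per device; one plausible way to compute it from sysfs (an assumption, the trace only shows the resulting values):

  min_disk_size=3221225472   # 3 GiB
  sec_size_to_bytes() {
      # /sys/block/<dev>/size counts 512-byte sectors regardless of LBA format
      echo $(( $(< "/sys/block/$1/size") * 512 ))
  }
  (( $(sec_size_to_bytes nvme0n1) >= min_disk_size )) && echo 'nvme0n1 qualifies'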
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 ))
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:31.941 04:53:46 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:32.878 Creating new GPT entries in memory.
00:04:32.878 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:32.878 other utilities.
00:04:32.878 04:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:32.878 04:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:32.878 04:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:32.878 04:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:32.878 04:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
00:04:34.277 Creating new GPT entries in memory.
00:04:34.277 The operation has completed successfully.
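The partition geometry follows from the traced arithmetic: size starts at 1073741824 and (( size /= 4096 )) turns it into 262144, which partition_drive then uses as a length in 512-byte sectors, so each partition is 128 MiB. The first one therefore spans sectors 2048 through 2048+262144-1 = 264191:

  sgdisk /dev/nvme0n1 --zap-all                 # drop any old GPT/MBR label
  flock /dev/nvme0n1 \
      sgdisk /dev/nvme0n1 --new=1:2048:264191   # 262144 sectors = 128 MiB

flock serializes against concurrent opens (udev re-probing the disk) while sgdisk rewrites the label; sync_dev_uevents.sh, started just before and reaped by the later wait, presumably holds the test until the kernel announces the new partition node.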
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59452
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:34.277 04:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:34.278 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:04:34.278 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:34.278 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:34.278 04:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the @62 / @60 scan continues over the remaining controllers (0000:00:10.0, 0000:00:03.0 twice, 0000:00:12.0, 0000:00:13.0); none match 0000:00:11.0 ...]
00:04:35.055 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:35.055 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]]
00:04:35.055 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:35.055 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:35.055 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:35.055 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:04:35.055 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:35.055 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:35.055 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:35.055 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:35.055 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:35.055 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:35.055 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:35.313 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:04:35.313 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:04:35.313 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:35.313 /dev/nvme0n1: calling ioctl to re-read partition table: Success
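Teardown is signature-based rather than a full rewrite: wipefs erases just the magic bytes it reports, which is why the log lists the ext4 superblock magic (53 ef at offset 0x438) on the partition and the two GPT headers plus the protective-MBR 55 aa on the whole disk. The equivalent commands:

  umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  wipefs --all /dev/nvme0n1p1   # ext4 magic at 0x438
  wipefs --all /dev/nvme0n1     # primary + backup GPT header, PMBR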
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:35.313 04:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:35.572 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:04:35.572 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:04:35.572 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:35.572 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the remaining controllers (0000:00:10.0, 0000:00:03.0 twice, 0000:00:12.0, 0000:00:13.0) are scanned the same way without a match ...]
00:04:36.346 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:36.346 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]]
00:04:36.346 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:36.346 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:36.346 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:36.346 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:36.347 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' ''
00:04:36.347 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0
00:04:36.347 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:04:36.347 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:04:36.347 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:04:36.347 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:36.347 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:36.347 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:36.347 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.347 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0
00:04:36.347 04:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:36.347 04:53:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:36.347 04:53:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:36.605 04:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:04:36.605 04:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:04:36.605 04:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:36.605 04:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the remaining controllers are scanned the same way without a match ...]
00:04:37.379 04:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:37.379 04:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:37.379 04:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:04:37.379 04:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:04:37.379 04:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:37.379 04:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:37.379 04:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:37.379 04:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:37.379 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:37.379
00:04:37.379 real 0m5.462s
00:04:37.379 user 0m1.502s
00:04:37.379 sys 0m1.631s
00:04:37.379 04:53:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:37.379 04:53:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:04:37.379 ************************************
00:04:37.379 END TEST nvme_mount
00:04:37.379 ************************************
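Each verify call above follows the same shape: rerun scripts/setup.sh config with PCI_ALLOWED pinned to the controller under test, then scan its per-device report for the expected mount, data or holder entry. A paraphrase of that loop (not the verbatim devices.sh; dev, mounts and mount_point are the traced locals, and rootdir stands in for /home/vagrant/spdk_repo/spdk):

  found=0
  while read -r pci _ _ status; do
      # each config line looks like "0000:00:11.0 (1b36 0010): Active devices: ..."
      [[ $pci == "$dev" && $status == *"$mounts"* ]] && found=1
  done < <(PCI_ALLOWED=$dev "$rootdir/scripts/setup.sh" config)
  (( found == 1 ))
  [[ -z $mount_point ]] || mountpoint -q "$mount_point"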
00:04:37.379 04:53:51 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:37.379 04:53:51 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:37.379 04:53:51 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:37.379 04:53:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:37.379 ************************************
00:04:37.379 START TEST dm_mount
00:04:37.379 ************************************
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 ))
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:37.379 04:53:51 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:38.755 Creating new GPT entries in memory.
00:04:38.755 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:38.755 other utilities.
00:04:38.755 04:53:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:38.755 04:53:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:38.755 04:53:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:38.755 04:53:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:38.755 04:53:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
00:04:39.689 Creating new GPT entries in memory.
00:04:39.689 The operation has completed successfully.
00:04:39.689 04:53:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:39.689 04:53:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:39.689 04:53:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:39.689 04:53:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:39.689 04:53:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335
00:04:40.626 The operation has completed successfully.
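Both sgdisk calls complete, but the new partitions are only usable once their /dev nodes exist; that is what the backgrounded sync_dev_uevents.sh (reaped by the wait just below) guards, listening for the kernel's "add" uevents on nvme0n1p1 and nvme0n1p2. A simple polling stand-in (illustration only, not the actual script):

  for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
      until [[ -b $part ]]; do sleep 0.1; done   # wait for udev to create the node
  done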
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60091
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size=
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]]
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:40.626 04:53:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:40.885 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:04:40.885 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:04:40.885 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:04:40.885 04:53:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the remaining controllers (0000:00:10.0, 0000:00:03.0 twice, 0000:00:12.0, 0000:00:13.0) are scanned the same way without a match ...]
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]]
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]]
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:41.662 04:53:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:41.921 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:04:41.921 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:04:41.921 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:04:41.921 04:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the remaining controllers are scanned the same way without a match ...]
00:04:42.696 04:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:42.696 04:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:42.696 04:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:04:42.696 04:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:04:42.696 04:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:42.696 04:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:42.696 04:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
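The holder checks earlier (nvme0n1p1/holders/dm-0 and nvme0n1p2/holders/dm-0) show that nvme_dm_test stitched both 262144-sector partitions into a single device-mapper device, dm-0. The trace never prints the table, but a linear concatenation consistent with those holders would look like this (assumed layout, not taken from the log):

  dmsetup create nvme_dm_test <<'TABLE'
  0 262144 linear /dev/nvme0n1p1 0
  262144 262144 linear /dev/nvme0n1p2 0
  TABLE

dmsetup remove --force then tears the mapping down even if a stray opener remains, after which the partitions can be wiped like any other block device.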
00:04:42.696 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:42.696 04:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:42.696 04:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:42.696 00:04:42.696 real 0m5.216s 00:04:42.696 user 0m1.011s 00:04:42.696 sys 0m1.134s 00:04:42.696 ************************************ 00:04:42.696 END TEST dm_mount 00:04:42.696 ************************************ 00:04:42.696 04:53:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.696 04:53:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:42.696 04:53:57 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:42.696 04:53:57 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:42.696 04:53:57 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.696 04:53:57 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.696 04:53:57 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:42.696 04:53:57 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.696 04:53:57 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.955 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:42.956 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:42.956 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:42.956 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:42.956 04:53:57 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:42.956 04:53:57 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:42.956 04:53:57 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:42.956 04:53:57 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.956 04:53:57 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:42.956 04:53:57 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.956 04:53:57 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:42.956 00:04:42.956 real 0m12.767s 00:04:42.956 user 0m3.442s 00:04:42.956 sys 0m3.615s 00:04:42.956 04:53:57 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.956 ************************************ 00:04:42.956 04:53:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:42.956 END TEST devices 00:04:42.956 ************************************ 00:04:42.956 00:04:42.956 real 0m44.230s 00:04:42.956 user 0m10.457s 00:04:42.956 sys 0m13.378s 00:04:42.956 04:53:57 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.956 04:53:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:42.956 ************************************ 00:04:42.956 END TEST setup.sh 00:04:42.956 ************************************ 00:04:43.240 04:53:57 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:43.508 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:44.075 Hugepages 00:04:44.075 node hugesize free / total 00:04:44.075 node0 1048576kB 0 / 0 00:04:44.075 node0 2048kB 2048 / 2048 00:04:44.075 
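
The status report above ends with the per-node hugepage counts; the table that follows lists each PCI device with its driver binding and block nodes. As a minimal illustration (not part of the harness), the NVMe BDFs can be pulled out of that table by matching on its first column; the awk field numbers assume the column layout shown below.

    # Sketch: list NVMe BDFs from the setup.sh status device table.
    # Assumes the layout below (Type BDF Vendor Device ...); adjust the
    # field numbers if the columns ever change.
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh status | awk '$1 == "NVMe" { print $2 }'
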
00:04:44.075 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:44.075 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:44.333 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:44.333 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:44.333 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:44.333 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:44.333 04:53:58 -- spdk/autotest.sh@130 -- # uname -s 00:04:44.333 04:53:58 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:44.334 04:53:58 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:44.334 04:53:58 -- common/autotest_common.sh@1529 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:44.901 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.470 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.470 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.470 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.728 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.728 04:54:00 -- common/autotest_common.sh@1530 -- # sleep 1 00:04:46.664 04:54:01 -- common/autotest_common.sh@1531 -- # bdfs=() 00:04:46.664 04:54:01 -- common/autotest_common.sh@1531 -- # local bdfs 00:04:46.664 04:54:01 -- common/autotest_common.sh@1532 -- # bdfs=($(get_nvme_bdfs)) 00:04:46.664 04:54:01 -- common/autotest_common.sh@1532 -- # get_nvme_bdfs 00:04:46.664 04:54:01 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:46.664 04:54:01 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:46.664 04:54:01 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:46.664 04:54:01 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:46.664 04:54:01 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:46.664 04:54:01 -- common/autotest_common.sh@1513 -- # (( 4 == 0 )) 00:04:46.664 04:54:01 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:46.664 04:54:01 -- common/autotest_common.sh@1534 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.231 Waiting for block devices as requested 00:04:47.489 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:47.489 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:47.489 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:47.748 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:53.013 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:53.013 04:54:07 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 00:04:53.013 04:54:07 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:53.013 04:54:07 -- common/autotest_common.sh@1500 -- # grep 0000:00:10.0/nvme/nvme 00:04:53.013 04:54:07 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:53.013 04:54:07 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:53.013 04:54:07 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:53.013 04:54:07 -- common/autotest_common.sh@1505 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:53.013 04:54:07 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme1 00:04:53.013 04:54:07 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme1 00:04:53.013 04:54:07 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme1 ]] 00:04:53.013 04:54:07 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme1 00:04:53.013 04:54:07 -- common/autotest_common.sh@1543 -- # grep oacs 00:04:53.013 04:54:07 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:04:53.013 04:54:07 -- common/autotest_common.sh@1543 -- # oacs=' 0x12a' 00:04:53.013 04:54:07 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:04:53.013 04:54:07 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:04:53.013 04:54:07 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme1 00:04:53.013 04:54:07 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:04:53.013 04:54:07 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:04:53.013 04:54:07 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:04:53.013 04:54:07 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:04:53.013 04:54:07 -- common/autotest_common.sh@1555 -- # continue 00:04:53.013 04:54:07 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 00:04:53.013 04:54:07 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:53.013 04:54:07 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:53.013 04:54:07 -- common/autotest_common.sh@1500 -- # grep 0000:00:11.0/nvme/nvme 00:04:53.013 04:54:07 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:53.013 04:54:07 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:53.013 04:54:07 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:53.013 04:54:07 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme0 00:04:53.013 04:54:07 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme0 00:04:53.013 04:54:07 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme0 ]] 00:04:53.013 04:54:07 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme0 00:04:53.013 04:54:07 -- common/autotest_common.sh@1543 -- # grep oacs 00:04:53.013 04:54:07 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:04:53.013 04:54:07 -- common/autotest_common.sh@1543 -- # oacs=' 0x12a' 00:04:53.013 04:54:07 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:04:53.013 04:54:07 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:04:53.013 04:54:07 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme0 00:04:53.013 04:54:07 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:04:53.013 04:54:07 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:04:53.013 04:54:07 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:04:53.014 04:54:07 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:04:53.014 04:54:07 -- common/autotest_common.sh@1555 -- # continue 00:04:53.014 04:54:07 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 00:04:53.014 04:54:07 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:53.014 04:54:07 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:53.014 04:54:07 -- common/autotest_common.sh@1500 -- # 
grep 0000:00:12.0/nvme/nvme 00:04:53.014 04:54:07 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:53.014 04:54:07 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:53.014 04:54:07 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:53.014 04:54:07 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme2 00:04:53.014 04:54:07 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme2 00:04:53.014 04:54:07 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme2 ]] 00:04:53.014 04:54:07 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme2 00:04:53.014 04:54:07 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:04:53.014 04:54:07 -- common/autotest_common.sh@1543 -- # grep oacs 00:04:53.014 04:54:07 -- common/autotest_common.sh@1543 -- # oacs=' 0x12a' 00:04:53.014 04:54:07 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:04:53.014 04:54:07 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:04:53.014 04:54:07 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme2 00:04:53.014 04:54:07 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:04:53.014 04:54:07 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:04:53.014 04:54:07 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:04:53.014 04:54:07 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:04:53.014 04:54:07 -- common/autotest_common.sh@1555 -- # continue 00:04:53.014 04:54:07 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 00:04:53.014 04:54:07 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:53.014 04:54:07 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:53.014 04:54:07 -- common/autotest_common.sh@1500 -- # grep 0000:00:13.0/nvme/nvme 00:04:53.014 04:54:07 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:53.014 04:54:07 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:53.014 04:54:07 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:53.014 04:54:07 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme3 00:04:53.014 04:54:07 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme3 00:04:53.014 04:54:07 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme3 ]] 00:04:53.014 04:54:07 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme3 00:04:53.014 04:54:07 -- common/autotest_common.sh@1543 -- # grep oacs 00:04:53.014 04:54:07 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:04:53.014 04:54:07 -- common/autotest_common.sh@1543 -- # oacs=' 0x12a' 00:04:53.014 04:54:07 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:04:53.014 04:54:07 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:04:53.014 04:54:07 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme3 00:04:53.014 04:54:07 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:04:53.014 04:54:07 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:04:53.014 04:54:07 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:04:53.014 04:54:07 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:04:53.014 04:54:07 -- common/autotest_common.sh@1555 -- # continue 00:04:53.014 04:54:07 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:04:53.014 04:54:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.014 04:54:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.014 04:54:07 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:53.014 04:54:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.014 04:54:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.014 04:54:07 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.837 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.837 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.837 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.096 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.096 04:54:08 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:54.096 04:54:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.096 04:54:08 -- common/autotest_common.sh@10 -- # set +x 00:04:54.096 04:54:08 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:54.096 04:54:08 -- common/autotest_common.sh@1589 -- # mapfile -t bdfs 00:04:54.096 04:54:08 -- common/autotest_common.sh@1589 -- # get_nvme_bdfs_by_id 0x0a54 00:04:54.096 04:54:08 -- common/autotest_common.sh@1575 -- # bdfs=() 00:04:54.096 04:54:08 -- common/autotest_common.sh@1575 -- # local bdfs 00:04:54.096 04:54:08 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs 00:04:54.096 04:54:08 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:54.096 04:54:08 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:54.096 04:54:08 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.096 04:54:08 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:54.096 04:54:08 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:54.096 04:54:08 -- common/autotest_common.sh@1513 -- # (( 4 == 0 )) 00:04:54.096 04:54:08 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:54.096 04:54:08 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:04:54.096 04:54:08 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:54.096 04:54:08 -- common/autotest_common.sh@1578 -- # device=0x0010 00:04:54.096 04:54:08 -- common/autotest_common.sh@1579 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:54.096 04:54:08 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:04:54.096 04:54:08 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:54.096 04:54:08 -- common/autotest_common.sh@1578 -- # device=0x0010 00:04:54.096 04:54:08 -- common/autotest_common.sh@1579 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:54.096 04:54:08 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:04:54.096 04:54:08 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:54.354 04:54:08 -- common/autotest_common.sh@1578 -- # device=0x0010 00:04:54.354 04:54:08 -- common/autotest_common.sh@1579 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:54.354 04:54:08 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:04:54.354 04:54:08 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:54.354 04:54:08 -- common/autotest_common.sh@1578 -- # device=0x0010 00:04:54.354 
04:54:08 -- common/autotest_common.sh@1579 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:54.354 04:54:08 -- common/autotest_common.sh@1584 -- # printf '%s\n' 00:04:54.354 04:54:08 -- common/autotest_common.sh@1590 -- # [[ -z '' ]] 00:04:54.354 04:54:08 -- common/autotest_common.sh@1591 -- # return 0 00:04:54.354 04:54:08 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:54.354 04:54:08 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:54.354 04:54:08 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:54.354 04:54:08 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:54.354 04:54:08 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:54.354 04:54:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.354 04:54:08 -- common/autotest_common.sh@10 -- # set +x 00:04:54.354 04:54:08 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:54.354 04:54:08 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:54.354 04:54:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.354 04:54:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.354 04:54:08 -- common/autotest_common.sh@10 -- # set +x 00:04:54.354 ************************************ 00:04:54.354 START TEST env 00:04:54.354 ************************************ 00:04:54.354 04:54:08 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:54.354 * Looking for test storage... 00:04:54.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:54.354 04:54:08 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:54.354 04:54:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.354 04:54:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.354 04:54:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.354 ************************************ 00:04:54.354 START TEST env_memory 00:04:54.354 ************************************ 00:04:54.354 04:54:08 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:54.354 00:04:54.354 00:04:54.354 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.354 http://cunit.sourceforge.net/ 00:04:54.354 00:04:54.354 00:04:54.354 Suite: memory 00:04:54.354 Test: alloc and free memory map ...[2024-07-24 04:54:08.922218] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:54.354 passed 00:04:54.354 Test: mem map translation ...[2024-07-24 04:54:08.982883] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:54.354 [2024-07-24 04:54:08.982986] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:54.354 [2024-07-24 04:54:08.983102] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:54.354 [2024-07-24 04:54:08.983135] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:54.623 passed 00:04:54.623 Test: mem map registration ...[2024-07-24 04:54:09.080965] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:04:54.623 [2024-07-24 04:54:09.081038] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:54.623 passed 00:04:54.623 Test: mem map adjacent registrations ...passed 00:04:54.623 00:04:54.623 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.623 suites 1 1 n/a 0 0 00:04:54.623 tests 4 4 4 0 0 00:04:54.623 asserts 152 152 152 0 n/a 00:04:54.623 00:04:54.623 Elapsed time = 0.342 seconds 00:04:54.623 00:04:54.623 real 0m0.381s 00:04:54.623 user 0m0.355s 00:04:54.623 sys 0m0.023s 00:04:54.623 04:54:09 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.623 04:54:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:54.623 ************************************ 00:04:54.623 END TEST env_memory 00:04:54.623 ************************************ 00:04:54.920 04:54:09 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:54.920 04:54:09 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.920 04:54:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.920 04:54:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.920 ************************************ 00:04:54.920 START TEST env_vtophys 00:04:54.920 ************************************ 00:04:54.920 04:54:09 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:54.920 EAL: lib.eal log level changed from notice to debug 00:04:54.920 EAL: Detected lcore 0 as core 0 on socket 0 00:04:54.920 EAL: Detected lcore 1 as core 0 on socket 0 00:04:54.920 EAL: Detected lcore 2 as core 0 on socket 0 00:04:54.920 EAL: Detected lcore 3 as core 0 on socket 0 00:04:54.920 EAL: Detected lcore 4 as core 0 on socket 0 00:04:54.920 EAL: Detected lcore 5 as core 0 on socket 0 00:04:54.920 EAL: Detected lcore 6 as core 0 on socket 0 00:04:54.920 EAL: Detected lcore 7 as core 0 on socket 0 00:04:54.920 EAL: Detected lcore 8 as core 0 on socket 0 00:04:54.920 EAL: Detected lcore 9 as core 0 on socket 0 00:04:54.920 EAL: Maximum logical cores by configuration: 128 00:04:54.920 EAL: Detected CPU lcores: 10 00:04:54.920 EAL: Detected NUMA nodes: 1 00:04:54.920 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:54.920 EAL: Detected shared linkage of DPDK 00:04:54.920 EAL: No shared files mode enabled, IPC will be disabled 00:04:54.920 EAL: Selected IOVA mode 'PA' 00:04:54.920 EAL: Probing VFIO support... 00:04:54.920 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:54.920 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:54.920 EAL: Ask a virtual area of 0x2e000 bytes 00:04:54.920 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:54.920 EAL: Setting up physically contiguous memory... 
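
vtophys begins with EAL reserving virtual address space; the physical backing will come from the 2 MiB hugepage pool that setup.sh status reported earlier (node0: 2048 pages free of 2048). That pool can be watched from outside the test through stock kernel sysfs counters; the paths below are standard kernel ABI, not SPDK-specific.

    # Observe the 2 MiB hugepage pool EAL allocates from:
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages    # configured pages
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages  # currently free
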
00:04:54.920 EAL: Setting maximum number of open files to 524288 00:04:54.920 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:54.920 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:54.920 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.920 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:54.920 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.920 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.920 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:54.920 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:54.920 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.920 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:54.920 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.920 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.920 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:54.920 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:54.920 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.920 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:54.920 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.920 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.920 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:54.920 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:54.920 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.920 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:54.920 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.920 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.920 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:54.920 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:54.920 EAL: Hugepages will be freed exactly as allocated. 00:04:54.920 EAL: No shared files mode enabled, IPC is disabled 00:04:54.920 EAL: No shared files mode enabled, IPC is disabled 00:04:54.920 EAL: TSC frequency is ~2200000 KHz 00:04:54.920 EAL: Main lcore 0 is ready (tid=7fe5e7419a40;cpuset=[0]) 00:04:54.920 EAL: Trying to obtain current memory policy. 00:04:54.920 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.920 EAL: Restoring previous memory policy: 0 00:04:54.920 EAL: request: mp_malloc_sync 00:04:54.920 EAL: No shared files mode enabled, IPC is disabled 00:04:54.920 EAL: Heap on socket 0 was expanded by 2MB 00:04:54.920 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:54.920 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:54.920 EAL: Mem event callback 'spdk:(nil)' registered 00:04:54.920 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:54.920 00:04:54.920 00:04:54.920 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.920 http://cunit.sourceforge.net/ 00:04:54.920 00:04:54.920 00:04:54.920 Suite: components_suite 00:04:55.491 Test: vtophys_malloc_test ...passed 00:04:55.491 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
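
The four memseg lists reserved above are sized consistently: each covers n_segs:8192 segments of hugepage_sz:2097152 bytes, and 8192 x 2 MiB comes to exactly the 0x400000000-byte (16 GiB) VA window EAL reported per list. A one-line check of that arithmetic:

    printf '0x%x bytes = %d GiB\n' $((8192 * 2097152)) $(( (8192 * 2097152) / 1024**3 ))
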
00:04:55.491 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.491 EAL: Restoring previous memory policy: 4 00:04:55.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.491 EAL: request: mp_malloc_sync 00:04:55.491 EAL: No shared files mode enabled, IPC is disabled 00:04:55.491 EAL: Heap on socket 0 was expanded by 4MB 00:04:55.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.491 EAL: request: mp_malloc_sync 00:04:55.491 EAL: No shared files mode enabled, IPC is disabled 00:04:55.491 EAL: Heap on socket 0 was shrunk by 4MB 00:04:55.491 EAL: Trying to obtain current memory policy. 00:04:55.491 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.491 EAL: Restoring previous memory policy: 4 00:04:55.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.491 EAL: request: mp_malloc_sync 00:04:55.491 EAL: No shared files mode enabled, IPC is disabled 00:04:55.491 EAL: Heap on socket 0 was expanded by 6MB 00:04:55.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.491 EAL: request: mp_malloc_sync 00:04:55.491 EAL: No shared files mode enabled, IPC is disabled 00:04:55.491 EAL: Heap on socket 0 was shrunk by 6MB 00:04:55.491 EAL: Trying to obtain current memory policy. 00:04:55.491 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.491 EAL: Restoring previous memory policy: 4 00:04:55.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.491 EAL: request: mp_malloc_sync 00:04:55.491 EAL: No shared files mode enabled, IPC is disabled 00:04:55.491 EAL: Heap on socket 0 was expanded by 10MB 00:04:55.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.491 EAL: request: mp_malloc_sync 00:04:55.491 EAL: No shared files mode enabled, IPC is disabled 00:04:55.491 EAL: Heap on socket 0 was shrunk by 10MB 00:04:55.491 EAL: Trying to obtain current memory policy. 00:04:55.491 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.491 EAL: Restoring previous memory policy: 4 00:04:55.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.491 EAL: request: mp_malloc_sync 00:04:55.491 EAL: No shared files mode enabled, IPC is disabled 00:04:55.491 EAL: Heap on socket 0 was expanded by 18MB 00:04:55.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.491 EAL: request: mp_malloc_sync 00:04:55.491 EAL: No shared files mode enabled, IPC is disabled 00:04:55.491 EAL: Heap on socket 0 was shrunk by 18MB 00:04:55.491 EAL: Trying to obtain current memory policy. 00:04:55.491 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.491 EAL: Restoring previous memory policy: 4 00:04:55.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.491 EAL: request: mp_malloc_sync 00:04:55.491 EAL: No shared files mode enabled, IPC is disabled 00:04:55.491 EAL: Heap on socket 0 was expanded by 34MB 00:04:55.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.491 EAL: request: mp_malloc_sync 00:04:55.491 EAL: No shared files mode enabled, IPC is disabled 00:04:55.491 EAL: Heap on socket 0 was shrunk by 34MB 00:04:55.491 EAL: Trying to obtain current memory policy. 
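
Each allocation in this suite is bracketed the same way: EAL reads the current NUMA memory policy, sets MPOL_PREFERRED for socket 0 so the backing hugepages fault in on the intended node, then restores the previous policy. The restored value 4 matches MPOL_LOCAL in the kernel's numbering (an interpretation; the log prints only the number). The same node preference can be applied to a whole process with numactl; the target binary below is only a placeholder.

    numactl --preferred=0 ./memory_ut   # hypothetical invocation; steers page placement, not CPU affinity
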
00:04:55.491 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.491 EAL: Restoring previous memory policy: 4 00:04:55.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.491 EAL: request: mp_malloc_sync 00:04:55.491 EAL: No shared files mode enabled, IPC is disabled 00:04:55.491 EAL: Heap on socket 0 was expanded by 66MB 00:04:55.749 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.749 EAL: request: mp_malloc_sync 00:04:55.749 EAL: No shared files mode enabled, IPC is disabled 00:04:55.749 EAL: Heap on socket 0 was shrunk by 66MB 00:04:55.749 EAL: Trying to obtain current memory policy. 00:04:55.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.749 EAL: Restoring previous memory policy: 4 00:04:55.749 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.749 EAL: request: mp_malloc_sync 00:04:55.749 EAL: No shared files mode enabled, IPC is disabled 00:04:55.749 EAL: Heap on socket 0 was expanded by 130MB 00:04:56.007 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.007 EAL: request: mp_malloc_sync 00:04:56.007 EAL: No shared files mode enabled, IPC is disabled 00:04:56.007 EAL: Heap on socket 0 was shrunk by 130MB 00:04:56.007 EAL: Trying to obtain current memory policy. 00:04:56.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.007 EAL: Restoring previous memory policy: 4 00:04:56.007 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.007 EAL: request: mp_malloc_sync 00:04:56.007 EAL: No shared files mode enabled, IPC is disabled 00:04:56.007 EAL: Heap on socket 0 was expanded by 258MB 00:04:56.572 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.572 EAL: request: mp_malloc_sync 00:04:56.572 EAL: No shared files mode enabled, IPC is disabled 00:04:56.572 EAL: Heap on socket 0 was shrunk by 258MB 00:04:56.830 EAL: Trying to obtain current memory policy. 00:04:56.830 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.830 EAL: Restoring previous memory policy: 4 00:04:56.830 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.830 EAL: request: mp_malloc_sync 00:04:56.830 EAL: No shared files mode enabled, IPC is disabled 00:04:56.830 EAL: Heap on socket 0 was expanded by 514MB 00:04:57.396 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.396 EAL: request: mp_malloc_sync 00:04:57.396 EAL: No shared files mode enabled, IPC is disabled 00:04:57.396 EAL: Heap on socket 0 was shrunk by 514MB 00:04:58.329 EAL: Trying to obtain current memory policy. 
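
A pattern worth noting in the sizes above: every heap expansion is of the form 2^k + 2 MiB (4, 6, 10, 18, 34, 66, 130, 258, 514 so far, with 1026 below), which reads as a power-of-two test allocation plus one extra 2 MiB hugepage of allocator overhead. That overhead-page reading is an inference from the numbers, not something the log states.

    # Reproduce the expected expansion sizes, including the final 1026MB step below:
    for mb in 2 4 8 16 32 64 128 256 512 1024; do
      echo "malloc ${mb}MB -> heap expected to expand by $((mb + 2))MB"
    done
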
00:04:58.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.329 EAL: Restoring previous memory policy: 4 00:04:58.329 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.329 EAL: request: mp_malloc_sync 00:04:58.329 EAL: No shared files mode enabled, IPC is disabled 00:04:58.329 EAL: Heap on socket 0 was expanded by 1026MB 00:04:59.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.700 EAL: request: mp_malloc_sync 00:04:59.700 EAL: No shared files mode enabled, IPC is disabled 00:04:59.700 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:01.073 passed 00:05:01.073 00:05:01.073 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.073 suites 1 1 n/a 0 0 00:05:01.073 tests 2 2 2 0 0 00:05:01.073 asserts 5418 5418 5418 0 n/a 00:05:01.073 00:05:01.073 Elapsed time = 5.798 seconds 00:05:01.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.073 EAL: request: mp_malloc_sync 00:05:01.073 EAL: No shared files mode enabled, IPC is disabled 00:05:01.073 EAL: Heap on socket 0 was shrunk by 2MB 00:05:01.073 EAL: No shared files mode enabled, IPC is disabled 00:05:01.073 EAL: No shared files mode enabled, IPC is disabled 00:05:01.073 EAL: No shared files mode enabled, IPC is disabled 00:05:01.073 00:05:01.073 real 0m6.105s 00:05:01.073 user 0m5.286s 00:05:01.073 sys 0m0.668s 00:05:01.073 04:54:15 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.073 ************************************ 00:05:01.073 04:54:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:01.073 END TEST env_vtophys 00:05:01.073 ************************************ 00:05:01.073 04:54:15 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:01.073 04:54:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.073 04:54:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.073 04:54:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.073 ************************************ 00:05:01.073 START TEST env_pci 00:05:01.073 ************************************ 00:05:01.073 04:54:15 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:01.073 00:05:01.073 00:05:01.073 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.073 http://cunit.sourceforge.net/ 00:05:01.073 00:05:01.073 00:05:01.073 Suite: pci 00:05:01.073 Test: pci_hook ...[2024-07-24 04:54:15.463546] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61908 has claimed it 00:05:01.073 passed 00:05:01.073 00:05:01.073 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.073 suites 1 1 n/a 0 0 00:05:01.073 tests 1 1 1 0 0 00:05:01.073 asserts 25 25 25 0 n/a 00:05:01.073 00:05:01.073 Elapsed time = 0.005 seconds 00:05:01.073 EAL: Cannot find device (10000:00:01.0) 00:05:01.073 EAL: Failed to attach device on primary process 00:05:01.073 ************************************ 00:05:01.073 END TEST env_pci 00:05:01.073 ************************************ 00:05:01.073 00:05:01.073 real 0m0.067s 00:05:01.073 user 0m0.033s 00:05:01.073 sys 0m0.035s 00:05:01.073 04:54:15 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.073 04:54:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:01.073 04:54:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:01.073 04:54:15 env -- env/env.sh@15 -- # uname 00:05:01.073 04:54:15 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:01.073 04:54:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:01.073 04:54:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:01.073 04:54:15 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:01.073 04:54:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.073 04:54:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.073 ************************************ 00:05:01.073 START TEST env_dpdk_post_init 00:05:01.073 ************************************ 00:05:01.073 04:54:15 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:01.073 EAL: Detected CPU lcores: 10 00:05:01.073 EAL: Detected NUMA nodes: 1 00:05:01.073 EAL: Detected shared linkage of DPDK 00:05:01.073 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.073 EAL: Selected IOVA mode 'PA' 00:05:01.331 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.331 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:01.331 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:01.332 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:01.332 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:01.332 Starting DPDK initialization... 00:05:01.332 Starting SPDK post initialization... 00:05:01.332 SPDK NVMe probe 00:05:01.332 Attaching to 0000:00:10.0 00:05:01.332 Attaching to 0000:00:11.0 00:05:01.332 Attaching to 0000:00:12.0 00:05:01.332 Attaching to 0000:00:13.0 00:05:01.332 Attached to 0000:00:10.0 00:05:01.332 Attached to 0000:00:11.0 00:05:01.332 Attached to 0000:00:13.0 00:05:01.332 Attached to 0000:00:12.0 00:05:01.332 Cleaning up... 
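
All four controllers attach and the test cleans up. The binary takes exactly the arguments the harness assembled above, so the run is straightforward to reproduce by hand. env.sh adds --base-virtaddr on Linux, which is commonly done so DPDK's mappings land at a fixed base compatible with address-sanitizer shadow memory, though the log itself does not state the reason.

    # Re-run the post-init test standalone with the harness's arguments:
    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000
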
00:05:01.332 ************************************ 00:05:01.332 END TEST env_dpdk_post_init 00:05:01.332 ************************************ 00:05:01.332 00:05:01.332 real 0m0.285s 00:05:01.332 user 0m0.093s 00:05:01.332 sys 0m0.094s 00:05:01.332 04:54:15 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.332 04:54:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.332 04:54:15 env -- env/env.sh@26 -- # uname 00:05:01.332 04:54:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:01.332 04:54:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.332 04:54:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.332 04:54:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.332 04:54:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.332 ************************************ 00:05:01.332 START TEST env_mem_callbacks 00:05:01.332 ************************************ 00:05:01.332 04:54:15 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.332 EAL: Detected CPU lcores: 10 00:05:01.332 EAL: Detected NUMA nodes: 1 00:05:01.332 EAL: Detected shared linkage of DPDK 00:05:01.590 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.590 EAL: Selected IOVA mode 'PA' 00:05:01.590 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.590 00:05:01.590 00:05:01.590 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.590 http://cunit.sourceforge.net/ 00:05:01.590 00:05:01.590 00:05:01.590 Suite: memory 00:05:01.590 Test: test ... 00:05:01.590 register 0x200000200000 2097152 00:05:01.590 malloc 3145728 00:05:01.590 register 0x200000400000 4194304 00:05:01.590 buf 0x2000004fffc0 len 3145728 PASSED 00:05:01.590 malloc 64 00:05:01.590 buf 0x2000004ffec0 len 64 PASSED 00:05:01.590 malloc 4194304 00:05:01.590 register 0x200000800000 6291456 00:05:01.590 buf 0x2000009fffc0 len 4194304 PASSED 00:05:01.590 free 0x2000004fffc0 3145728 00:05:01.590 free 0x2000004ffec0 64 00:05:01.590 unregister 0x200000400000 4194304 PASSED 00:05:01.590 free 0x2000009fffc0 4194304 00:05:01.590 unregister 0x200000800000 6291456 PASSED 00:05:01.590 malloc 8388608 00:05:01.590 register 0x200000400000 10485760 00:05:01.590 buf 0x2000005fffc0 len 8388608 PASSED 00:05:01.590 free 0x2000005fffc0 8388608 00:05:01.590 unregister 0x200000400000 10485760 PASSED 00:05:01.590 passed 00:05:01.590 00:05:01.590 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.590 suites 1 1 n/a 0 0 00:05:01.590 tests 1 1 1 0 0 00:05:01.590 asserts 15 15 15 0 n/a 00:05:01.590 00:05:01.590 Elapsed time = 0.053 seconds 00:05:01.590 00:05:01.590 real 0m0.241s 00:05:01.590 user 0m0.081s 00:05:01.590 sys 0m0.056s 00:05:01.590 04:54:16 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.590 04:54:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:01.590 ************************************ 00:05:01.590 END TEST env_mem_callbacks 00:05:01.590 ************************************ 00:05:01.590 00:05:01.590 real 0m7.430s 00:05:01.590 user 0m5.962s 00:05:01.590 sys 0m1.090s 00:05:01.590 04:54:16 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.590 04:54:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.590 ************************************ 00:05:01.590 END TEST env 00:05:01.590 
************************************ 00:05:01.849 04:54:16 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.849 04:54:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.849 04:54:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.849 04:54:16 -- common/autotest_common.sh@10 -- # set +x 00:05:01.849 ************************************ 00:05:01.849 START TEST rpc 00:05:01.849 ************************************ 00:05:01.849 04:54:16 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.849 * Looking for test storage... 00:05:01.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.849 04:54:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62027 00:05:01.849 04:54:16 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:01.849 04:54:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.849 04:54:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62027 00:05:01.849 04:54:16 rpc -- common/autotest_common.sh@829 -- # '[' -z 62027 ']' 00:05:01.849 04:54:16 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.849 04:54:16 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.849 04:54:16 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.849 04:54:16 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.849 04:54:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.107 [2024-07-24 04:54:16.495854] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:02.107 [2024-07-24 04:54:16.496049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62027 ] 00:05:02.107 [2024-07-24 04:54:16.665472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.365 [2024-07-24 04:54:16.831894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:02.365 [2024-07-24 04:54:16.831964] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62027' to capture a snapshot of events at runtime. 00:05:02.365 [2024-07-24 04:54:16.831983] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:02.365 [2024-07-24 04:54:16.831996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:02.365 [2024-07-24 04:54:16.832009] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62027 for offline analysis/debug. 
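
Before the RPC suites start, spdk_tgt prints its own tracing hints: the target was launched with -e bdev, so the bdev tracepoint group is enabled, and the shared-memory trace file is named after the PID. Both commands below are taken verbatim from the notices above and only apply while this target instance (PID 62027) is alive.

    spdk_trace -s spdk_tgt -p 62027               # live snapshot of the enabled tracepoints
    cp /dev/shm/spdk_tgt_trace.pid62027 /tmp/     # keep the trace for offline analysis

The -e bdev flag is also why the trace test further down reports tpoint_group_mask 0x8 with the bdev group's tpoint_mask fully set.
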
00:05:02.365 [2024-07-24 04:54:16.832060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.932 04:54:17 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.932 04:54:17 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:02.932 04:54:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.932 04:54:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.932 04:54:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:02.932 04:54:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:02.932 04:54:17 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.932 04:54:17 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.932 04:54:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.932 ************************************ 00:05:02.932 START TEST rpc_integrity 00:05:02.932 ************************************ 00:05:02.932 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:02.932 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:02.932 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.932 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.932 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.932 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:02.932 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:02.932 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:02.932 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:02.932 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.932 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.932 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.932 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:02.932 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:02.932 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.932 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.932 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.932 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:02.932 { 00:05:02.932 "name": "Malloc0", 00:05:02.932 "aliases": [ 00:05:02.932 "7189c0f4-4005-431f-9d80-87fe3b11ddec" 00:05:02.932 ], 00:05:02.932 "product_name": "Malloc disk", 00:05:02.932 "block_size": 512, 00:05:02.932 "num_blocks": 16384, 00:05:02.932 "uuid": "7189c0f4-4005-431f-9d80-87fe3b11ddec", 00:05:02.932 "assigned_rate_limits": { 00:05:02.932 "rw_ios_per_sec": 0, 00:05:02.932 "rw_mbytes_per_sec": 0, 00:05:02.932 "r_mbytes_per_sec": 0, 00:05:02.932 "w_mbytes_per_sec": 0 00:05:02.932 }, 00:05:02.932 "claimed": false, 00:05:02.933 "zoned": false, 00:05:02.933 "supported_io_types": { 00:05:02.933 "read": true, 00:05:02.933 "write": true, 00:05:02.933 "unmap": true, 00:05:02.933 "flush": true, 
00:05:02.933 "reset": true, 00:05:02.933 "nvme_admin": false, 00:05:02.933 "nvme_io": false, 00:05:02.933 "nvme_io_md": false, 00:05:02.933 "write_zeroes": true, 00:05:02.933 "zcopy": true, 00:05:02.933 "get_zone_info": false, 00:05:02.933 "zone_management": false, 00:05:02.933 "zone_append": false, 00:05:02.933 "compare": false, 00:05:02.933 "compare_and_write": false, 00:05:02.933 "abort": true, 00:05:02.933 "seek_hole": false, 00:05:02.933 "seek_data": false, 00:05:02.933 "copy": true, 00:05:02.933 "nvme_iov_md": false 00:05:02.933 }, 00:05:02.933 "memory_domains": [ 00:05:02.933 { 00:05:02.933 "dma_device_id": "system", 00:05:02.933 "dma_device_type": 1 00:05:02.933 }, 00:05:02.933 { 00:05:02.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.933 "dma_device_type": 2 00:05:02.933 } 00:05:02.933 ], 00:05:02.933 "driver_specific": {} 00:05:02.933 } 00:05:02.933 ]' 00:05:03.190 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.190 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.190 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:03.190 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.190 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.190 [2024-07-24 04:54:17.621646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:03.190 [2024-07-24 04:54:17.621731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.190 [2024-07-24 04:54:17.621766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:03.190 [2024-07-24 04:54:17.621779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.190 [2024-07-24 04:54:17.624569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.190 [2024-07-24 04:54:17.624608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.190 Passthru0 00:05:03.190 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.190 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.190 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.190 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.190 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.190 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.190 { 00:05:03.190 "name": "Malloc0", 00:05:03.190 "aliases": [ 00:05:03.190 "7189c0f4-4005-431f-9d80-87fe3b11ddec" 00:05:03.190 ], 00:05:03.190 "product_name": "Malloc disk", 00:05:03.190 "block_size": 512, 00:05:03.190 "num_blocks": 16384, 00:05:03.190 "uuid": "7189c0f4-4005-431f-9d80-87fe3b11ddec", 00:05:03.190 "assigned_rate_limits": { 00:05:03.190 "rw_ios_per_sec": 0, 00:05:03.190 "rw_mbytes_per_sec": 0, 00:05:03.190 "r_mbytes_per_sec": 0, 00:05:03.190 "w_mbytes_per_sec": 0 00:05:03.190 }, 00:05:03.190 "claimed": true, 00:05:03.190 "claim_type": "exclusive_write", 00:05:03.190 "zoned": false, 00:05:03.190 "supported_io_types": { 00:05:03.190 "read": true, 00:05:03.190 "write": true, 00:05:03.190 "unmap": true, 00:05:03.190 "flush": true, 00:05:03.190 "reset": true, 00:05:03.190 "nvme_admin": false, 00:05:03.190 "nvme_io": false, 00:05:03.190 "nvme_io_md": false, 00:05:03.190 "write_zeroes": true, 00:05:03.190 "zcopy": true, 
00:05:03.190 "get_zone_info": false, 00:05:03.190 "zone_management": false, 00:05:03.190 "zone_append": false, 00:05:03.190 "compare": false, 00:05:03.190 "compare_and_write": false, 00:05:03.190 "abort": true, 00:05:03.190 "seek_hole": false, 00:05:03.190 "seek_data": false, 00:05:03.191 "copy": true, 00:05:03.191 "nvme_iov_md": false 00:05:03.191 }, 00:05:03.191 "memory_domains": [ 00:05:03.191 { 00:05:03.191 "dma_device_id": "system", 00:05:03.191 "dma_device_type": 1 00:05:03.191 }, 00:05:03.191 { 00:05:03.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.191 "dma_device_type": 2 00:05:03.191 } 00:05:03.191 ], 00:05:03.191 "driver_specific": {} 00:05:03.191 }, 00:05:03.191 { 00:05:03.191 "name": "Passthru0", 00:05:03.191 "aliases": [ 00:05:03.191 "ac69a9b1-50f7-5469-ad7e-9f191846aac0" 00:05:03.191 ], 00:05:03.191 "product_name": "passthru", 00:05:03.191 "block_size": 512, 00:05:03.191 "num_blocks": 16384, 00:05:03.191 "uuid": "ac69a9b1-50f7-5469-ad7e-9f191846aac0", 00:05:03.191 "assigned_rate_limits": { 00:05:03.191 "rw_ios_per_sec": 0, 00:05:03.191 "rw_mbytes_per_sec": 0, 00:05:03.191 "r_mbytes_per_sec": 0, 00:05:03.191 "w_mbytes_per_sec": 0 00:05:03.191 }, 00:05:03.191 "claimed": false, 00:05:03.191 "zoned": false, 00:05:03.191 "supported_io_types": { 00:05:03.191 "read": true, 00:05:03.191 "write": true, 00:05:03.191 "unmap": true, 00:05:03.191 "flush": true, 00:05:03.191 "reset": true, 00:05:03.191 "nvme_admin": false, 00:05:03.191 "nvme_io": false, 00:05:03.191 "nvme_io_md": false, 00:05:03.191 "write_zeroes": true, 00:05:03.191 "zcopy": true, 00:05:03.191 "get_zone_info": false, 00:05:03.191 "zone_management": false, 00:05:03.191 "zone_append": false, 00:05:03.191 "compare": false, 00:05:03.191 "compare_and_write": false, 00:05:03.191 "abort": true, 00:05:03.191 "seek_hole": false, 00:05:03.191 "seek_data": false, 00:05:03.191 "copy": true, 00:05:03.191 "nvme_iov_md": false 00:05:03.191 }, 00:05:03.191 "memory_domains": [ 00:05:03.191 { 00:05:03.191 "dma_device_id": "system", 00:05:03.191 "dma_device_type": 1 00:05:03.191 }, 00:05:03.191 { 00:05:03.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.191 "dma_device_type": 2 00:05:03.191 } 00:05:03.191 ], 00:05:03.191 "driver_specific": { 00:05:03.191 "passthru": { 00:05:03.191 "name": "Passthru0", 00:05:03.191 "base_bdev_name": "Malloc0" 00:05:03.191 } 00:05:03.191 } 00:05:03.191 } 00:05:03.191 ]' 00:05:03.191 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:03.191 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.191 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.191 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.191 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.191 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.191 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:03.191 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.191 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.191 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.191 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.191 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.191 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:03.191 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.191 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.191 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:03.191 ************************************ 00:05:03.191 END TEST rpc_integrity 00:05:03.191 ************************************ 00:05:03.191 04:54:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.191 00:05:03.191 real 0m0.352s 00:05:03.191 user 0m0.223s 00:05:03.191 sys 0m0.039s 00:05:03.191 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.191 04:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.448 04:54:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:03.448 04:54:17 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.448 04:54:17 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.448 04:54:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.448 ************************************ 00:05:03.448 START TEST rpc_plugins 00:05:03.448 ************************************ 00:05:03.448 04:54:17 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:03.448 04:54:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:03.448 04:54:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.448 04:54:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.448 04:54:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.448 04:54:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:03.448 04:54:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:03.448 04:54:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.448 04:54:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.449 04:54:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.449 04:54:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:03.449 { 00:05:03.449 "name": "Malloc1", 00:05:03.449 "aliases": [ 00:05:03.449 "0000b496-d0a3-4e58-8628-6e0210c88de3" 00:05:03.449 ], 00:05:03.449 "product_name": "Malloc disk", 00:05:03.449 "block_size": 4096, 00:05:03.449 "num_blocks": 256, 00:05:03.449 "uuid": "0000b496-d0a3-4e58-8628-6e0210c88de3", 00:05:03.449 "assigned_rate_limits": { 00:05:03.449 "rw_ios_per_sec": 0, 00:05:03.449 "rw_mbytes_per_sec": 0, 00:05:03.449 "r_mbytes_per_sec": 0, 00:05:03.449 "w_mbytes_per_sec": 0 00:05:03.449 }, 00:05:03.449 "claimed": false, 00:05:03.449 "zoned": false, 00:05:03.449 "supported_io_types": { 00:05:03.449 "read": true, 00:05:03.449 "write": true, 00:05:03.449 "unmap": true, 00:05:03.449 "flush": true, 00:05:03.449 "reset": true, 00:05:03.449 "nvme_admin": false, 00:05:03.449 "nvme_io": false, 00:05:03.449 "nvme_io_md": false, 00:05:03.449 "write_zeroes": true, 00:05:03.449 "zcopy": true, 00:05:03.449 "get_zone_info": false, 00:05:03.449 "zone_management": false, 00:05:03.449 "zone_append": false, 00:05:03.449 "compare": false, 00:05:03.449 "compare_and_write": false, 00:05:03.449 "abort": true, 00:05:03.449 "seek_hole": false, 00:05:03.449 "seek_data": false, 00:05:03.449 "copy": true, 00:05:03.449 "nvme_iov_md": false 00:05:03.449 }, 00:05:03.449 "memory_domains": [ 00:05:03.449 { 00:05:03.449 "dma_device_id": "system", 00:05:03.449 "dma_device_type": 1 00:05:03.449 }, 00:05:03.449 { 00:05:03.449 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:05:03.449 "dma_device_type": 2 00:05:03.449 } 00:05:03.449 ], 00:05:03.449 "driver_specific": {} 00:05:03.449 } 00:05:03.449 ]' 00:05:03.449 04:54:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:03.449 04:54:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:03.449 04:54:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:03.449 04:54:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.449 04:54:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.449 04:54:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.449 04:54:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:03.449 04:54:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.449 04:54:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.449 04:54:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.449 04:54:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:03.449 04:54:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:03.449 ************************************ 00:05:03.449 END TEST rpc_plugins 00:05:03.449 ************************************ 00:05:03.449 04:54:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:03.449 00:05:03.449 real 0m0.154s 00:05:03.449 user 0m0.104s 00:05:03.449 sys 0m0.019s 00:05:03.449 04:54:18 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.449 04:54:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.449 04:54:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:03.449 04:54:18 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.449 04:54:18 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.449 04:54:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.449 ************************************ 00:05:03.449 START TEST rpc_trace_cmd_test 00:05:03.449 ************************************ 00:05:03.449 04:54:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:03.449 04:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:03.449 04:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:03.449 04:54:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.449 04:54:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.707 04:54:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.707 04:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:03.707 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62027", 00:05:03.707 "tpoint_group_mask": "0x8", 00:05:03.707 "iscsi_conn": { 00:05:03.707 "mask": "0x2", 00:05:03.707 "tpoint_mask": "0x0" 00:05:03.707 }, 00:05:03.707 "scsi": { 00:05:03.707 "mask": "0x4", 00:05:03.707 "tpoint_mask": "0x0" 00:05:03.707 }, 00:05:03.707 "bdev": { 00:05:03.707 "mask": "0x8", 00:05:03.707 "tpoint_mask": "0xffffffffffffffff" 00:05:03.707 }, 00:05:03.707 "nvmf_rdma": { 00:05:03.707 "mask": "0x10", 00:05:03.707 "tpoint_mask": "0x0" 00:05:03.707 }, 00:05:03.707 "nvmf_tcp": { 00:05:03.707 "mask": "0x20", 00:05:03.707 "tpoint_mask": "0x0" 00:05:03.707 }, 00:05:03.707 "ftl": { 00:05:03.707 "mask": "0x40", 00:05:03.707 "tpoint_mask": "0x0" 00:05:03.707 }, 00:05:03.707 "blobfs": { 00:05:03.707 "mask": "0x80", 00:05:03.707 
"tpoint_mask": "0x0" 00:05:03.707 }, 00:05:03.707 "dsa": { 00:05:03.707 "mask": "0x200", 00:05:03.707 "tpoint_mask": "0x0" 00:05:03.707 }, 00:05:03.707 "thread": { 00:05:03.707 "mask": "0x400", 00:05:03.707 "tpoint_mask": "0x0" 00:05:03.707 }, 00:05:03.707 "nvme_pcie": { 00:05:03.707 "mask": "0x800", 00:05:03.707 "tpoint_mask": "0x0" 00:05:03.707 }, 00:05:03.707 "iaa": { 00:05:03.707 "mask": "0x1000", 00:05:03.707 "tpoint_mask": "0x0" 00:05:03.707 }, 00:05:03.707 "nvme_tcp": { 00:05:03.707 "mask": "0x2000", 00:05:03.707 "tpoint_mask": "0x0" 00:05:03.707 }, 00:05:03.707 "bdev_nvme": { 00:05:03.707 "mask": "0x4000", 00:05:03.707 "tpoint_mask": "0x0" 00:05:03.707 }, 00:05:03.707 "sock": { 00:05:03.707 "mask": "0x8000", 00:05:03.707 "tpoint_mask": "0x0" 00:05:03.707 } 00:05:03.707 }' 00:05:03.707 04:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:03.707 04:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:03.707 04:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:03.707 04:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:03.707 04:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:03.707 04:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:03.707 04:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:03.707 04:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:03.707 04:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:03.965 ************************************ 00:05:03.965 END TEST rpc_trace_cmd_test 00:05:03.965 ************************************ 00:05:03.965 04:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:03.965 00:05:03.965 real 0m0.285s 00:05:03.965 user 0m0.249s 00:05:03.965 sys 0m0.026s 00:05:03.965 04:54:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.965 04:54:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.965 04:54:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:03.965 04:54:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:03.965 04:54:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:03.965 04:54:18 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.965 04:54:18 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.965 04:54:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.965 ************************************ 00:05:03.965 START TEST rpc_daemon_integrity 00:05:03.965 ************************************ 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.965 04:54:18 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.965 { 00:05:03.965 "name": "Malloc2", 00:05:03.965 "aliases": [ 00:05:03.965 "ef736d11-0f93-4634-bdbc-09b3a0ae1af5" 00:05:03.965 ], 00:05:03.965 "product_name": "Malloc disk", 00:05:03.965 "block_size": 512, 00:05:03.965 "num_blocks": 16384, 00:05:03.965 "uuid": "ef736d11-0f93-4634-bdbc-09b3a0ae1af5", 00:05:03.965 "assigned_rate_limits": { 00:05:03.965 "rw_ios_per_sec": 0, 00:05:03.965 "rw_mbytes_per_sec": 0, 00:05:03.965 "r_mbytes_per_sec": 0, 00:05:03.965 "w_mbytes_per_sec": 0 00:05:03.965 }, 00:05:03.965 "claimed": false, 00:05:03.965 "zoned": false, 00:05:03.965 "supported_io_types": { 00:05:03.965 "read": true, 00:05:03.965 "write": true, 00:05:03.965 "unmap": true, 00:05:03.965 "flush": true, 00:05:03.965 "reset": true, 00:05:03.965 "nvme_admin": false, 00:05:03.965 "nvme_io": false, 00:05:03.965 "nvme_io_md": false, 00:05:03.965 "write_zeroes": true, 00:05:03.965 "zcopy": true, 00:05:03.965 "get_zone_info": false, 00:05:03.965 "zone_management": false, 00:05:03.965 "zone_append": false, 00:05:03.965 "compare": false, 00:05:03.965 "compare_and_write": false, 00:05:03.965 "abort": true, 00:05:03.965 "seek_hole": false, 00:05:03.965 "seek_data": false, 00:05:03.965 "copy": true, 00:05:03.965 "nvme_iov_md": false 00:05:03.965 }, 00:05:03.965 "memory_domains": [ 00:05:03.965 { 00:05:03.965 "dma_device_id": "system", 00:05:03.965 "dma_device_type": 1 00:05:03.965 }, 00:05:03.965 { 00:05:03.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.965 "dma_device_type": 2 00:05:03.965 } 00:05:03.965 ], 00:05:03.965 "driver_specific": {} 00:05:03.965 } 00:05:03.965 ]' 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.965 [2024-07-24 04:54:18.552217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:03.965 [2024-07-24 04:54:18.552306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.965 [2024-07-24 04:54:18.552338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:03.965 [2024-07-24 04:54:18.552352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.965 [2024-07-24 04:54:18.555211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.965 [2024-07-24 04:54:18.555269] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.965 Passthru0 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.965 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.965 { 00:05:03.965 "name": "Malloc2", 00:05:03.965 "aliases": [ 00:05:03.965 "ef736d11-0f93-4634-bdbc-09b3a0ae1af5" 00:05:03.965 ], 00:05:03.965 "product_name": "Malloc disk", 00:05:03.965 "block_size": 512, 00:05:03.966 "num_blocks": 16384, 00:05:03.966 "uuid": "ef736d11-0f93-4634-bdbc-09b3a0ae1af5", 00:05:03.966 "assigned_rate_limits": { 00:05:03.966 "rw_ios_per_sec": 0, 00:05:03.966 "rw_mbytes_per_sec": 0, 00:05:03.966 "r_mbytes_per_sec": 0, 00:05:03.966 "w_mbytes_per_sec": 0 00:05:03.966 }, 00:05:03.966 "claimed": true, 00:05:03.966 "claim_type": "exclusive_write", 00:05:03.966 "zoned": false, 00:05:03.966 "supported_io_types": { 00:05:03.966 "read": true, 00:05:03.966 "write": true, 00:05:03.966 "unmap": true, 00:05:03.966 "flush": true, 00:05:03.966 "reset": true, 00:05:03.966 "nvme_admin": false, 00:05:03.966 "nvme_io": false, 00:05:03.966 "nvme_io_md": false, 00:05:03.966 "write_zeroes": true, 00:05:03.966 "zcopy": true, 00:05:03.966 "get_zone_info": false, 00:05:03.966 "zone_management": false, 00:05:03.966 "zone_append": false, 00:05:03.966 "compare": false, 00:05:03.966 "compare_and_write": false, 00:05:03.966 "abort": true, 00:05:03.966 "seek_hole": false, 00:05:03.966 "seek_data": false, 00:05:03.966 "copy": true, 00:05:03.966 "nvme_iov_md": false 00:05:03.966 }, 00:05:03.966 "memory_domains": [ 00:05:03.966 { 00:05:03.966 "dma_device_id": "system", 00:05:03.966 "dma_device_type": 1 00:05:03.966 }, 00:05:03.966 { 00:05:03.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.966 "dma_device_type": 2 00:05:03.966 } 00:05:03.966 ], 00:05:03.966 "driver_specific": {} 00:05:03.966 }, 00:05:03.966 { 00:05:03.966 "name": "Passthru0", 00:05:03.966 "aliases": [ 00:05:03.966 "980669d2-072c-5a2d-875c-4b6b3d4b79b1" 00:05:03.966 ], 00:05:03.966 "product_name": "passthru", 00:05:03.966 "block_size": 512, 00:05:03.966 "num_blocks": 16384, 00:05:03.966 "uuid": "980669d2-072c-5a2d-875c-4b6b3d4b79b1", 00:05:03.966 "assigned_rate_limits": { 00:05:03.966 "rw_ios_per_sec": 0, 00:05:03.966 "rw_mbytes_per_sec": 0, 00:05:03.966 "r_mbytes_per_sec": 0, 00:05:03.966 "w_mbytes_per_sec": 0 00:05:03.966 }, 00:05:03.966 "claimed": false, 00:05:03.966 "zoned": false, 00:05:03.966 "supported_io_types": { 00:05:03.966 "read": true, 00:05:03.966 "write": true, 00:05:03.966 "unmap": true, 00:05:03.966 "flush": true, 00:05:03.966 "reset": true, 00:05:03.966 "nvme_admin": false, 00:05:03.966 "nvme_io": false, 00:05:03.966 "nvme_io_md": false, 00:05:03.966 "write_zeroes": true, 00:05:03.966 "zcopy": true, 00:05:03.966 "get_zone_info": false, 00:05:03.966 "zone_management": false, 00:05:03.966 "zone_append": false, 00:05:03.966 "compare": false, 00:05:03.966 "compare_and_write": false, 00:05:03.966 "abort": true, 00:05:03.966 "seek_hole": false, 00:05:03.966 "seek_data": false, 00:05:03.966 "copy": true, 00:05:03.966 "nvme_iov_md": false 00:05:03.966 }, 00:05:03.966 
"memory_domains": [ 00:05:03.966 { 00:05:03.966 "dma_device_id": "system", 00:05:03.966 "dma_device_type": 1 00:05:03.966 }, 00:05:03.966 { 00:05:03.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.966 "dma_device_type": 2 00:05:03.966 } 00:05:03.966 ], 00:05:03.966 "driver_specific": { 00:05:03.966 "passthru": { 00:05:03.966 "name": "Passthru0", 00:05:03.966 "base_bdev_name": "Malloc2" 00:05:03.966 } 00:05:03.966 } 00:05:03.966 } 00:05:03.966 ]' 00:05:03.966 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:04.224 ************************************ 00:05:04.224 END TEST rpc_daemon_integrity 00:05:04.224 ************************************ 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:04.224 00:05:04.224 real 0m0.336s 00:05:04.224 user 0m0.209s 00:05:04.224 sys 0m0.045s 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.224 04:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.224 04:54:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:04.224 04:54:18 rpc -- rpc/rpc.sh@84 -- # killprocess 62027 00:05:04.224 04:54:18 rpc -- common/autotest_common.sh@948 -- # '[' -z 62027 ']' 00:05:04.224 04:54:18 rpc -- common/autotest_common.sh@952 -- # kill -0 62027 00:05:04.224 04:54:18 rpc -- common/autotest_common.sh@953 -- # uname 00:05:04.224 04:54:18 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.224 04:54:18 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62027 00:05:04.224 killing process with pid 62027 00:05:04.224 04:54:18 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.224 04:54:18 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.224 04:54:18 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62027' 00:05:04.224 04:54:18 rpc -- common/autotest_common.sh@967 -- # kill 62027 00:05:04.224 04:54:18 rpc -- common/autotest_common.sh@972 -- # wait 62027 00:05:06.125 00:05:06.125 real 0m4.340s 00:05:06.125 user 0m5.202s 
00:05:06.125 sys 0m0.706s 00:05:06.125 ************************************ 00:05:06.125 END TEST rpc 00:05:06.125 ************************************ 00:05:06.125 04:54:20 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.125 04:54:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.125 04:54:20 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:06.125 04:54:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.125 04:54:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.125 04:54:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.125 ************************************ 00:05:06.125 START TEST skip_rpc 00:05:06.125 ************************************ 00:05:06.125 04:54:20 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:06.125 * Looking for test storage... 00:05:06.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:06.125 04:54:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:06.125 04:54:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:06.125 04:54:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:06.125 04:54:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.125 04:54:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.125 04:54:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.125 ************************************ 00:05:06.125 START TEST skip_rpc 00:05:06.125 ************************************ 00:05:06.125 04:54:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:06.125 04:54:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62237 00:05:06.125 04:54:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.125 04:54:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:06.125 04:54:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:06.383 [2024-07-24 04:54:20.842148] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
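The target starting up here was launched by rpc/skip_rpc.sh@15 with --no-rpc-server, so it will never open /var/tmp/spdk.sock; the five-second sleep only gives the reactor time to come up before the negative check. Stripped of the harness plumbing, the test that follows amounts to this sketch, with `!` standing in for the NOT helper:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target with the RPC server disabled
    tgt_pid=$!
    sleep 5                                       # let the reactor start on core 0
    ! ./scripts/rpc.py spdk_get_version           # must fail: nothing is listening
    kill "$tgt_pid" && wait "$tgt_pid"            # what killprocess 62237 does below
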
00:05:06.383 [2024-07-24 04:54:20.842339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62237 ] 00:05:06.383 [2024-07-24 04:54:21.011490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.642 [2024-07-24 04:54:21.166328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62237 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 62237 ']' 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 62237 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62237 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.909 killing process with pid 62237 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62237' 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 62237 00:05:11.909 04:54:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 62237 00:05:13.285 ************************************ 00:05:13.285 END TEST skip_rpc 00:05:13.285 ************************************ 00:05:13.285 00:05:13.285 real 0m6.794s 00:05:13.285 user 0m6.394s 00:05:13.285 sys 0m0.298s 00:05:13.285 04:54:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.285 04:54:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:13.286 04:54:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:13.286 04:54:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.286 04:54:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.286 04:54:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.286 ************************************ 00:05:13.286 START TEST skip_rpc_with_json 00:05:13.286 ************************************ 00:05:13.286 04:54:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:13.286 04:54:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:13.286 04:54:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62341 00:05:13.286 04:54:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.286 04:54:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62341 00:05:13.286 04:54:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.286 04:54:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 62341 ']' 00:05:13.286 04:54:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.286 04:54:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.286 04:54:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.286 04:54:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.286 04:54:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.286 [2024-07-24 04:54:27.689086] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
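gen_json_config, whose startup banner this is, drives a live target into a known state and snapshots it: the nvmf_get_transports probe below fails on purpose (no TCP transport exists yet), the transport is then created, and the whole subsystem tree is written out with save_config. Reduced to the underlying rpc.py calls, with the config path as in skip_rpc.sh:

    ./scripts/rpc.py nvmf_get_transports --trtype tcp || true   # expected: "transport 'tcp' does not exist"
    ./scripts/rpc.py nvmf_create_transport -t tcp               # logs "*** TCP Transport Init ***"
    ./scripts/rpc.py save_config > test/rpc/config.json         # snapshot of every subsystem
    # a second target later replays the snapshot with no client involved:
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json
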
00:05:13.286 [2024-07-24 04:54:27.689530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62341 ] 00:05:13.286 [2024-07-24 04:54:27.858881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.544 [2024-07-24 04:54:28.006270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.112 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.112 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:14.112 04:54:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:14.112 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.112 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.112 [2024-07-24 04:54:28.607834] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:14.112 request: 00:05:14.112 { 00:05:14.112 "trtype": "tcp", 00:05:14.112 "method": "nvmf_get_transports", 00:05:14.112 "req_id": 1 00:05:14.112 } 00:05:14.112 Got JSON-RPC error response 00:05:14.112 response: 00:05:14.112 { 00:05:14.112 "code": -19, 00:05:14.112 "message": "No such device" 00:05:14.112 } 00:05:14.112 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:14.112 04:54:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:14.112 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.112 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.112 [2024-07-24 04:54:28.619954] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:14.112 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.112 04:54:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:14.112 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.112 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.371 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.371 04:54:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:14.371 { 00:05:14.371 "subsystems": [ 00:05:14.371 { 00:05:14.371 "subsystem": "keyring", 00:05:14.371 "config": [] 00:05:14.371 }, 00:05:14.371 { 00:05:14.371 "subsystem": "iobuf", 00:05:14.371 "config": [ 00:05:14.371 { 00:05:14.371 "method": "iobuf_set_options", 00:05:14.371 "params": { 00:05:14.371 "small_pool_count": 8192, 00:05:14.371 "large_pool_count": 1024, 00:05:14.371 "small_bufsize": 8192, 00:05:14.371 "large_bufsize": 135168 00:05:14.371 } 00:05:14.371 } 00:05:14.371 ] 00:05:14.371 }, 00:05:14.371 { 00:05:14.371 "subsystem": "sock", 00:05:14.371 "config": [ 00:05:14.371 { 00:05:14.371 "method": "sock_set_default_impl", 00:05:14.371 "params": { 00:05:14.371 "impl_name": "posix" 00:05:14.371 } 00:05:14.371 }, 00:05:14.371 { 00:05:14.371 "method": "sock_impl_set_options", 00:05:14.371 "params": { 00:05:14.371 "impl_name": "ssl", 00:05:14.371 "recv_buf_size": 4096, 00:05:14.371 "send_buf_size": 4096, 
00:05:14.371 "enable_recv_pipe": true, 00:05:14.371 "enable_quickack": false, 00:05:14.371 "enable_placement_id": 0, 00:05:14.371 "enable_zerocopy_send_server": true, 00:05:14.371 "enable_zerocopy_send_client": false, 00:05:14.371 "zerocopy_threshold": 0, 00:05:14.371 "tls_version": 0, 00:05:14.371 "enable_ktls": false 00:05:14.371 } 00:05:14.371 }, 00:05:14.371 { 00:05:14.371 "method": "sock_impl_set_options", 00:05:14.371 "params": { 00:05:14.371 "impl_name": "posix", 00:05:14.371 "recv_buf_size": 2097152, 00:05:14.371 "send_buf_size": 2097152, 00:05:14.371 "enable_recv_pipe": true, 00:05:14.371 "enable_quickack": false, 00:05:14.371 "enable_placement_id": 0, 00:05:14.371 "enable_zerocopy_send_server": true, 00:05:14.371 "enable_zerocopy_send_client": false, 00:05:14.371 "zerocopy_threshold": 0, 00:05:14.371 "tls_version": 0, 00:05:14.371 "enable_ktls": false 00:05:14.371 } 00:05:14.371 } 00:05:14.372 ] 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "subsystem": "vmd", 00:05:14.372 "config": [] 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "subsystem": "accel", 00:05:14.372 "config": [ 00:05:14.372 { 00:05:14.372 "method": "accel_set_options", 00:05:14.372 "params": { 00:05:14.372 "small_cache_size": 128, 00:05:14.372 "large_cache_size": 16, 00:05:14.372 "task_count": 2048, 00:05:14.372 "sequence_count": 2048, 00:05:14.372 "buf_count": 2048 00:05:14.372 } 00:05:14.372 } 00:05:14.372 ] 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "subsystem": "bdev", 00:05:14.372 "config": [ 00:05:14.372 { 00:05:14.372 "method": "bdev_set_options", 00:05:14.372 "params": { 00:05:14.372 "bdev_io_pool_size": 65535, 00:05:14.372 "bdev_io_cache_size": 256, 00:05:14.372 "bdev_auto_examine": true, 00:05:14.372 "iobuf_small_cache_size": 128, 00:05:14.372 "iobuf_large_cache_size": 16 00:05:14.372 } 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "method": "bdev_raid_set_options", 00:05:14.372 "params": { 00:05:14.372 "process_window_size_kb": 1024, 00:05:14.372 "process_max_bandwidth_mb_sec": 0 00:05:14.372 } 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "method": "bdev_iscsi_set_options", 00:05:14.372 "params": { 00:05:14.372 "timeout_sec": 30 00:05:14.372 } 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "method": "bdev_nvme_set_options", 00:05:14.372 "params": { 00:05:14.372 "action_on_timeout": "none", 00:05:14.372 "timeout_us": 0, 00:05:14.372 "timeout_admin_us": 0, 00:05:14.372 "keep_alive_timeout_ms": 10000, 00:05:14.372 "arbitration_burst": 0, 00:05:14.372 "low_priority_weight": 0, 00:05:14.372 "medium_priority_weight": 0, 00:05:14.372 "high_priority_weight": 0, 00:05:14.372 "nvme_adminq_poll_period_us": 10000, 00:05:14.372 "nvme_ioq_poll_period_us": 0, 00:05:14.372 "io_queue_requests": 0, 00:05:14.372 "delay_cmd_submit": true, 00:05:14.372 "transport_retry_count": 4, 00:05:14.372 "bdev_retry_count": 3, 00:05:14.372 "transport_ack_timeout": 0, 00:05:14.372 "ctrlr_loss_timeout_sec": 0, 00:05:14.372 "reconnect_delay_sec": 0, 00:05:14.372 "fast_io_fail_timeout_sec": 0, 00:05:14.372 "disable_auto_failback": false, 00:05:14.372 "generate_uuids": false, 00:05:14.372 "transport_tos": 0, 00:05:14.372 "nvme_error_stat": false, 00:05:14.372 "rdma_srq_size": 0, 00:05:14.372 "io_path_stat": false, 00:05:14.372 "allow_accel_sequence": false, 00:05:14.372 "rdma_max_cq_size": 0, 00:05:14.372 "rdma_cm_event_timeout_ms": 0, 00:05:14.372 "dhchap_digests": [ 00:05:14.372 "sha256", 00:05:14.372 "sha384", 00:05:14.372 "sha512" 00:05:14.372 ], 00:05:14.372 "dhchap_dhgroups": [ 00:05:14.372 "null", 00:05:14.372 "ffdhe2048", 00:05:14.372 
"ffdhe3072", 00:05:14.372 "ffdhe4096", 00:05:14.372 "ffdhe6144", 00:05:14.372 "ffdhe8192" 00:05:14.372 ] 00:05:14.372 } 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "method": "bdev_nvme_set_hotplug", 00:05:14.372 "params": { 00:05:14.372 "period_us": 100000, 00:05:14.372 "enable": false 00:05:14.372 } 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "method": "bdev_wait_for_examine" 00:05:14.372 } 00:05:14.372 ] 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "subsystem": "scsi", 00:05:14.372 "config": null 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "subsystem": "scheduler", 00:05:14.372 "config": [ 00:05:14.372 { 00:05:14.372 "method": "framework_set_scheduler", 00:05:14.372 "params": { 00:05:14.372 "name": "static" 00:05:14.372 } 00:05:14.372 } 00:05:14.372 ] 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "subsystem": "vhost_scsi", 00:05:14.372 "config": [] 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "subsystem": "vhost_blk", 00:05:14.372 "config": [] 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "subsystem": "ublk", 00:05:14.372 "config": [] 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "subsystem": "nbd", 00:05:14.372 "config": [] 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "subsystem": "nvmf", 00:05:14.372 "config": [ 00:05:14.372 { 00:05:14.372 "method": "nvmf_set_config", 00:05:14.372 "params": { 00:05:14.372 "discovery_filter": "match_any", 00:05:14.372 "admin_cmd_passthru": { 00:05:14.372 "identify_ctrlr": false 00:05:14.372 } 00:05:14.372 } 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "method": "nvmf_set_max_subsystems", 00:05:14.372 "params": { 00:05:14.372 "max_subsystems": 1024 00:05:14.372 } 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "method": "nvmf_set_crdt", 00:05:14.372 "params": { 00:05:14.372 "crdt1": 0, 00:05:14.372 "crdt2": 0, 00:05:14.372 "crdt3": 0 00:05:14.372 } 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "method": "nvmf_create_transport", 00:05:14.372 "params": { 00:05:14.372 "trtype": "TCP", 00:05:14.372 "max_queue_depth": 128, 00:05:14.372 "max_io_qpairs_per_ctrlr": 127, 00:05:14.372 "in_capsule_data_size": 4096, 00:05:14.372 "max_io_size": 131072, 00:05:14.372 "io_unit_size": 131072, 00:05:14.372 "max_aq_depth": 128, 00:05:14.372 "num_shared_buffers": 511, 00:05:14.372 "buf_cache_size": 4294967295, 00:05:14.372 "dif_insert_or_strip": false, 00:05:14.372 "zcopy": false, 00:05:14.372 "c2h_success": true, 00:05:14.372 "sock_priority": 0, 00:05:14.372 "abort_timeout_sec": 1, 00:05:14.372 "ack_timeout": 0, 00:05:14.372 "data_wr_pool_size": 0 00:05:14.372 } 00:05:14.372 } 00:05:14.372 ] 00:05:14.372 }, 00:05:14.372 { 00:05:14.372 "subsystem": "iscsi", 00:05:14.372 "config": [ 00:05:14.372 { 00:05:14.372 "method": "iscsi_set_options", 00:05:14.372 "params": { 00:05:14.372 "node_base": "iqn.2016-06.io.spdk", 00:05:14.372 "max_sessions": 128, 00:05:14.372 "max_connections_per_session": 2, 00:05:14.372 "max_queue_depth": 64, 00:05:14.372 "default_time2wait": 2, 00:05:14.372 "default_time2retain": 20, 00:05:14.372 "first_burst_length": 8192, 00:05:14.372 "immediate_data": true, 00:05:14.372 "allow_duplicated_isid": false, 00:05:14.372 "error_recovery_level": 0, 00:05:14.372 "nop_timeout": 60, 00:05:14.372 "nop_in_interval": 30, 00:05:14.372 "disable_chap": false, 00:05:14.372 "require_chap": false, 00:05:14.372 "mutual_chap": false, 00:05:14.372 "chap_group": 0, 00:05:14.372 "max_large_datain_per_connection": 64, 00:05:14.372 "max_r2t_per_connection": 4, 00:05:14.372 "pdu_pool_size": 36864, 00:05:14.372 "immediate_data_pool_size": 16384, 00:05:14.372 "data_out_pool_size": 2048 
00:05:14.372 } 00:05:14.372 } 00:05:14.372 ] 00:05:14.372 } 00:05:14.372 ] 00:05:14.372 } 00:05:14.372 04:54:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:14.372 04:54:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62341 00:05:14.372 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62341 ']' 00:05:14.372 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62341 00:05:14.372 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:14.372 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.372 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62341 00:05:14.372 killing process with pid 62341 00:05:14.372 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.372 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.372 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62341' 00:05:14.372 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62341 00:05:14.372 04:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62341 00:05:16.277 04:54:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62381 00:05:16.277 04:54:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:16.277 04:54:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:21.548 04:54:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62381 00:05:21.548 04:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62381 ']' 00:05:21.548 04:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62381 00:05:21.548 04:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:21.548 04:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.548 04:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62381 00:05:21.548 killing process with pid 62381 00:05:21.548 04:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.548 04:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.548 04:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62381' 00:05:21.548 04:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62381 00:05:21.548 04:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62381 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:22.925 00:05:22.925 real 0m9.737s 00:05:22.925 user 0m9.405s 00:05:22.925 sys 0m0.675s 00:05:22.925 ************************************ 00:05:22.925 END TEST skip_rpc_with_json 00:05:22.925 ************************************ 00:05:22.925 04:54:37 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.925 04:54:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:22.925 04:54:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.925 04:54:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.925 04:54:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.925 ************************************ 00:05:22.925 START TEST skip_rpc_with_delay 00:05:22.925 ************************************ 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.925 [2024-07-24 04:54:37.456754] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
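The *ERROR* just above is this test's pass condition, not a failure: skip_rpc_with_delay starts the target with a deliberately contradictory flag pair and only verifies that startup is refused (the unclaim_cpu_cores message that follows is ordinary shutdown noise). In outline:

    # --wait-for-rpc requires an RPC server; --no-rpc-server forbids one
    ! build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
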
00:05:22.925 [2024-07-24 04:54:37.456940] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.925 00:05:22.925 real 0m0.154s 00:05:22.925 user 0m0.088s 00:05:22.925 sys 0m0.064s 00:05:22.925 ************************************ 00:05:22.925 END TEST skip_rpc_with_delay 00:05:22.925 ************************************ 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.925 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:22.925 04:54:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:22.925 04:54:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:22.925 04:54:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:22.925 04:54:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.925 04:54:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.925 04:54:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.185 ************************************ 00:05:23.185 START TEST exit_on_failed_rpc_init 00:05:23.185 ************************************ 00:05:23.185 04:54:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:23.185 04:54:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62508 00:05:23.185 04:54:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62508 00:05:23.185 04:54:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.185 04:54:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 62508 ']' 00:05:23.185 04:54:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.185 04:54:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.185 04:54:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.185 04:54:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.185 04:54:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.185 [2024-07-24 04:54:37.680695] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
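The target booting here (pid 62508) is only scaffolding for exit_on_failed_rpc_init: once it owns /var/tmp/spdk.sock, a second spdk_tgt is launched below on core mask 0x2 and must die during RPC init because the socket is already taken. The shape of the test, as a sketch over the same binaries:

    build/bin/spdk_tgt -m 0x1 &              # first target claims /var/tmp/spdk.sock
    first_pid=$!
    ! build/bin/spdk_tgt -m 0x2              # second target: "socket path ... in use. Specify another."
    kill "$first_pid" && wait "$first_pid"   # what killprocess 62508 does at the end
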
00:05:23.185 [2024-07-24 04:54:37.681148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62508 ] 00:05:23.444 [2024-07-24 04:54:37.853062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.444 [2024-07-24 04:54:37.999529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:24.381 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.382 [2024-07-24 04:54:38.771773] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:24.382 [2024-07-24 04:54:38.771986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62526 ] 00:05:24.382 [2024-07-24 04:54:38.944755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.640 [2024-07-24 04:54:39.161444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.640 [2024-07-24 04:54:39.161611] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
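The es=234 / es=106 / es=1 bookkeeping that follows is the NOT helper in autotest_common.sh normalizing the refused target's exit status. Reading only what this trace shows: 234 exceeds 128 (the shell's "terminated by signal" range), subtracting 128 yields the 106, and the final case statement collapses any surviving failure code to 1 so the negated test registers as a clean pass. A hedged paraphrase of that logic, inferred from the trace rather than quoted from the script:

    es=234                                   # raw exit status of the second spdk_tgt
    (( es > 128 )) && es=$(( es - 128 ))     # strip the signal offset: 234 -> 106
    case "$es" in *) es=1 ;; esac            # collapse to a generic failure code
    (( !es == 0 ))                           # NOT succeeds exactly when es != 0
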
00:05:24.640 [2024-07-24 04:54:39.161638] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:24.640 [2024-07-24 04:54:39.161655] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62508 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 62508 ']' 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 62508 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62508 00:05:25.207 killing process with pid 62508 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62508' 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 62508 00:05:25.207 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 62508 00:05:27.112 ************************************ 00:05:27.112 END TEST exit_on_failed_rpc_init 00:05:27.112 ************************************ 00:05:27.112 00:05:27.112 real 0m3.728s 00:05:27.112 user 0m4.495s 00:05:27.112 sys 0m0.479s 00:05:27.112 04:54:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.112 04:54:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.112 04:54:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:27.112 ************************************ 00:05:27.112 END TEST skip_rpc 00:05:27.112 ************************************ 00:05:27.112 00:05:27.112 real 0m20.714s 00:05:27.112 user 0m20.487s 00:05:27.112 sys 0m1.691s 00:05:27.112 04:54:41 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.112 04:54:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.112 04:54:41 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:27.112 04:54:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.112 04:54:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.112 04:54:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.112 
************************************ 00:05:27.112 START TEST rpc_client 00:05:27.112 ************************************ 00:05:27.112 04:54:41 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:27.112 * Looking for test storage... 00:05:27.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:27.112 04:54:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:27.112 OK 00:05:27.112 04:54:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:27.112 00:05:27.112 real 0m0.145s 00:05:27.112 user 0m0.062s 00:05:27.112 sys 0m0.086s 00:05:27.112 04:54:41 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.112 04:54:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:27.112 ************************************ 00:05:27.112 END TEST rpc_client 00:05:27.112 ************************************ 00:05:27.112 04:54:41 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:27.112 04:54:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.112 04:54:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.112 04:54:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.112 ************************************ 00:05:27.112 START TEST json_config 00:05:27.112 ************************************ 00:05:27.112 04:54:41 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:27.112 04:54:41 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:27.112 04:54:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:27.112 04:54:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.112 04:54:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.112 04:54:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.112 04:54:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.112 04:54:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.112 04:54:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.112 04:54:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.112 04:54:41 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.112 04:54:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.112 04:54:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.112 04:54:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1c6d3f82-85be-430a-8cc2-e7f7d95cebc9 00:05:27.112 04:54:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=1c6d3f82-85be-430a-8cc2-e7f7d95cebc9 00:05:27.112 04:54:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.113 04:54:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.113 04:54:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.113 04:54:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.113 04:54:41 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:27.113 04:54:41 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.113 04:54:41 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.113 04:54:41 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.113 04:54:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.113 04:54:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.113 04:54:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.113 04:54:41 json_config -- paths/export.sh@5 -- # export PATH 00:05:27.113 04:54:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.113 04:54:41 json_config -- nvmf/common.sh@47 -- # : 0 00:05:27.113 04:54:41 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:27.113 04:54:41 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:27.113 04:54:41 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.113 04:54:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.113 04:54:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.113 04:54:41 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:27.113 04:54:41 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:27.113 04:54:41 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:27.113 04:54:41 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:27.113 04:54:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:27.113 04:54:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:27.113 04:54:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:27.113 04:54:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:27.113 WARNING: No tests are enabled so not running JSON configuration tests 00:05:27.113 04:54:41 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:27.113 04:54:41 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:27.113 ************************************ 00:05:27.113 END TEST json_config 00:05:27.113 ************************************ 00:05:27.113 00:05:27.113 real 0m0.079s 00:05:27.113 user 0m0.033s 00:05:27.113 sys 0m0.045s 00:05:27.113 04:54:41 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.113 04:54:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.113 04:54:41 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:27.113 04:54:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.113 04:54:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.113 04:54:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.113 ************************************ 00:05:27.113 START TEST json_config_extra_key 00:05:27.113 ************************************ 00:05:27.113 04:54:41 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:27.372 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1c6d3f82-85be-430a-8cc2-e7f7d95cebc9 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1c6d3f82-85be-430a-8cc2-e7f7d95cebc9 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.372 04:54:41 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:27.372 04:54:41 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.372 04:54:41 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.372 04:54:41 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.372 
04:54:41 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.372 04:54:41 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.373 04:54:41 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.373 04:54:41 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:27.373 04:54:41 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.373 04:54:41 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:27.373 04:54:41 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:27.373 04:54:41 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:27.373 04:54:41 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.373 04:54:41 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.373 04:54:41 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.373 04:54:41 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:27.373 04:54:41 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:27.373 04:54:41 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:27.373 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:27.373 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:27.373 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:27.373 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:27.373 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:27.373 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:27.373 04:54:41 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:27.373 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:27.373 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:27.373 INFO: launching applications... 00:05:27.373 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:27.373 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:27.373 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:27.373 04:54:41 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:27.373 04:54:41 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:27.373 04:54:41 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:27.373 04:54:41 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:27.373 04:54:41 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:27.373 04:54:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.373 04:54:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.373 04:54:41 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62707 00:05:27.373 Waiting for target to run... 00:05:27.373 04:54:41 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:27.373 04:54:41 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62707 /var/tmp/spdk_tgt.sock 00:05:27.373 04:54:41 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 62707 ']' 00:05:27.373 04:54:41 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:27.373 04:54:41 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:27.373 04:54:41 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:27.373 04:54:41 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.373 04:54:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.373 04:54:41 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:27.373 [2024-07-24 04:54:41.899225] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
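The trace above shows json_config_test_start_app launching spdk_tgt against the extra_key.json config and then waiting on its RPC socket. A minimal sketch of that launch-and-wait pattern, using the same paths and flags as the trace; the polling loop is an illustrative stand-in for the suite's waitforlisten helper (which does more than a socket check), and the 0.1 s interval is an assumption:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!                                      # 62707 in the run traced here
    for ((i = 0; i < 100; i++)); do                 # max_retries=100, as traced
        [[ -S /var/tmp/spdk_tgt.sock ]] && break    # socket appears once the target listens
        sleep 0.1                                   # assumed interval
    done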
00:05:27.373 [2024-07-24 04:54:41.899400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62707 ] 00:05:27.633 [2024-07-24 04:54:42.238037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.892 [2024-07-24 04:54:42.405434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.460 04:54:42 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.460 00:05:28.460 04:54:42 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:28.460 04:54:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:28.460 INFO: shutting down applications... 00:05:28.460 04:54:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:28.460 04:54:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:28.460 04:54:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:28.460 04:54:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:28.460 04:54:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62707 ]] 00:05:28.460 04:54:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62707 00:05:28.460 04:54:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:28.460 04:54:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.460 04:54:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62707 00:05:28.460 04:54:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.027 04:54:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.027 04:54:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.027 04:54:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62707 00:05:29.027 04:54:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.592 04:54:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.592 04:54:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.592 04:54:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62707 00:05:29.592 04:54:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.850 04:54:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.850 04:54:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.850 04:54:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62707 00:05:29.850 04:54:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:30.415 04:54:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:30.415 04:54:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.415 04:54:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62707 00:05:30.415 04:54:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:30.415 04:54:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:30.415 04:54:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:30.415 SPDK target shutdown done 00:05:30.415 04:54:44 json_config_extra_key -- 
json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:30.415 Success 00:05:30.415 04:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:30.415 ************************************ 00:05:30.415 END TEST json_config_extra_key 00:05:30.415 ************************************ 00:05:30.415 00:05:30.415 real 0m3.228s 00:05:30.415 user 0m3.171s 00:05:30.415 sys 0m0.455s 00:05:30.415 04:54:44 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.415 04:54:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:30.415 04:54:44 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.415 04:54:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.415 04:54:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.415 04:54:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.415 ************************************ 00:05:30.415 START TEST alias_rpc 00:05:30.415 ************************************ 00:05:30.415 04:54:44 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.673 * Looking for test storage... 00:05:30.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:30.673 04:54:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:30.673 04:54:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62792 00:05:30.673 04:54:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62792 00:05:30.673 04:54:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.673 04:54:45 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 62792 ']' 00:05:30.673 04:54:45 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.673 04:54:45 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.673 04:54:45 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.673 04:54:45 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.673 04:54:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.673 [2024-07-24 04:54:45.194053] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
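The retry loop that just ran to completion is the whole of the shutdown protocol: one SIGINT, then repeated liveness probes. Condensed, with the bounds and interval exactly as traced and the pid hedged behind a variable:

    kill -SIGINT "$app_pid"                         # ask spdk_tgt to exit cleanly
    for ((i = 0; i < 30; i++)); do                  # up to 30 probes, as traced
        kill -0 "$app_pid" 2>/dev/null || break     # kill -0 only tests liveness, sends nothing
        sleep 0.5
    done

In the run above the loop cycled a few times before the probe failed, at which point the test printed 'SPDK target shutdown done' and Success.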
00:05:30.673 [2024-07-24 04:54:45.194241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62792 ] 00:05:30.932 [2024-07-24 04:54:45.360519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.932 [2024-07-24 04:54:45.518680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.499 04:54:46 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.499 04:54:46 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:31.499 04:54:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:32.065 04:54:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62792 00:05:32.065 04:54:46 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 62792 ']' 00:05:32.065 04:54:46 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 62792 00:05:32.065 04:54:46 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:32.065 04:54:46 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.065 04:54:46 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62792 00:05:32.065 04:54:46 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.065 04:54:46 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.065 killing process with pid 62792 00:05:32.065 04:54:46 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62792' 00:05:32.065 04:54:46 alias_rpc -- common/autotest_common.sh@967 -- # kill 62792 00:05:32.065 04:54:46 alias_rpc -- common/autotest_common.sh@972 -- # wait 62792 00:05:34.004 00:05:34.004 real 0m3.196s 00:05:34.004 user 0m3.448s 00:05:34.004 sys 0m0.440s 00:05:34.004 04:54:48 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.004 04:54:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.004 ************************************ 00:05:34.004 END TEST alias_rpc 00:05:34.004 ************************************ 00:05:34.004 04:54:48 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:34.004 04:54:48 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:34.004 04:54:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.004 04:54:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.004 04:54:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.004 ************************************ 00:05:34.004 START TEST spdkcli_tcp 00:05:34.004 ************************************ 00:05:34.004 04:54:48 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:34.004 * Looking for test storage... 
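alias_rpc tears its target down through the killprocess helper whose individual steps (kill -0, uname, ps, kill, wait) are all visible in the trace above. A condensed rendering; in this run ps resolves comm to reactor_0, so the plain-kill path is taken, and the sudo branch below is a hedged simplification of whatever the real helper does for sudo-wrapped targets:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                  # pid must name a live process
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name == sudo ]] && return 1 # assumption: refuse sudo wrappers
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap it so callers observe the exit
    }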
00:05:34.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:34.004 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:34.004 04:54:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:34.004 04:54:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:34.004 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:34.004 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:34.004 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:34.004 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:34.004 04:54:48 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.004 04:54:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.004 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62880 00:05:34.004 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 62880 00:05:34.004 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:34.004 04:54:48 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 62880 ']' 00:05:34.004 04:54:48 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.004 04:54:48 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.004 04:54:48 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.004 04:54:48 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.004 04:54:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.004 [2024-07-24 04:54:48.441899] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
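Unlike the single-core targets earlier in this run, spdkcli_tcp starts spdk_tgt with -m 0x3, a cpumask with bits 0 and 1 set, which is why the initialization that follows brings up two reactors. A sketch of the setup being traced, with err_cleanup stubbed out since its body is not part of this trace:

    err_cleanup() { :; }                            # stub; the real handler tears down leftover pids
    trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &   # 0x3 = cores 0 and 1
    spdk_tgt_pid=$!                                 # 62880 in this run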
00:05:34.004 [2024-07-24 04:54:48.442086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62880 ] 00:05:34.004 [2024-07-24 04:54:48.612828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.263 [2024-07-24 04:54:48.770960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.263 [2024-07-24 04:54:48.770977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.830 04:54:49 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.830 04:54:49 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:34.830 04:54:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=62897 00:05:34.830 04:54:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:34.830 04:54:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:35.090 [ 00:05:35.090 "bdev_malloc_delete", 00:05:35.090 "bdev_malloc_create", 00:05:35.090 "bdev_null_resize", 00:05:35.090 "bdev_null_delete", 00:05:35.090 "bdev_null_create", 00:05:35.090 "bdev_nvme_cuse_unregister", 00:05:35.090 "bdev_nvme_cuse_register", 00:05:35.090 "bdev_opal_new_user", 00:05:35.090 "bdev_opal_set_lock_state", 00:05:35.090 "bdev_opal_delete", 00:05:35.090 "bdev_opal_get_info", 00:05:35.090 "bdev_opal_create", 00:05:35.090 "bdev_nvme_opal_revert", 00:05:35.090 "bdev_nvme_opal_init", 00:05:35.090 "bdev_nvme_send_cmd", 00:05:35.090 "bdev_nvme_get_path_iostat", 00:05:35.090 "bdev_nvme_get_mdns_discovery_info", 00:05:35.090 "bdev_nvme_stop_mdns_discovery", 00:05:35.090 "bdev_nvme_start_mdns_discovery", 00:05:35.090 "bdev_nvme_set_multipath_policy", 00:05:35.090 "bdev_nvme_set_preferred_path", 00:05:35.090 "bdev_nvme_get_io_paths", 00:05:35.090 "bdev_nvme_remove_error_injection", 00:05:35.090 "bdev_nvme_add_error_injection", 00:05:35.090 "bdev_nvme_get_discovery_info", 00:05:35.090 "bdev_nvme_stop_discovery", 00:05:35.090 "bdev_nvme_start_discovery", 00:05:35.090 "bdev_nvme_get_controller_health_info", 00:05:35.090 "bdev_nvme_disable_controller", 00:05:35.090 "bdev_nvme_enable_controller", 00:05:35.090 "bdev_nvme_reset_controller", 00:05:35.090 "bdev_nvme_get_transport_statistics", 00:05:35.090 "bdev_nvme_apply_firmware", 00:05:35.090 "bdev_nvme_detach_controller", 00:05:35.090 "bdev_nvme_get_controllers", 00:05:35.090 "bdev_nvme_attach_controller", 00:05:35.090 "bdev_nvme_set_hotplug", 00:05:35.090 "bdev_nvme_set_options", 00:05:35.090 "bdev_passthru_delete", 00:05:35.090 "bdev_passthru_create", 00:05:35.090 "bdev_lvol_set_parent_bdev", 00:05:35.090 "bdev_lvol_set_parent", 00:05:35.090 "bdev_lvol_check_shallow_copy", 00:05:35.090 "bdev_lvol_start_shallow_copy", 00:05:35.090 "bdev_lvol_grow_lvstore", 00:05:35.090 "bdev_lvol_get_lvols", 00:05:35.090 "bdev_lvol_get_lvstores", 00:05:35.090 "bdev_lvol_delete", 00:05:35.090 "bdev_lvol_set_read_only", 00:05:35.090 "bdev_lvol_resize", 00:05:35.090 "bdev_lvol_decouple_parent", 00:05:35.090 "bdev_lvol_inflate", 00:05:35.090 "bdev_lvol_rename", 00:05:35.090 "bdev_lvol_clone_bdev", 00:05:35.090 "bdev_lvol_clone", 00:05:35.090 "bdev_lvol_snapshot", 00:05:35.090 "bdev_lvol_create", 00:05:35.090 "bdev_lvol_delete_lvstore", 00:05:35.090 "bdev_lvol_rename_lvstore", 00:05:35.090 "bdev_lvol_create_lvstore", 
00:05:35.090 "bdev_raid_set_options", 00:05:35.090 "bdev_raid_remove_base_bdev", 00:05:35.090 "bdev_raid_add_base_bdev", 00:05:35.090 "bdev_raid_delete", 00:05:35.090 "bdev_raid_create", 00:05:35.090 "bdev_raid_get_bdevs", 00:05:35.090 "bdev_error_inject_error", 00:05:35.090 "bdev_error_delete", 00:05:35.090 "bdev_error_create", 00:05:35.090 "bdev_split_delete", 00:05:35.090 "bdev_split_create", 00:05:35.090 "bdev_delay_delete", 00:05:35.090 "bdev_delay_create", 00:05:35.090 "bdev_delay_update_latency", 00:05:35.090 "bdev_zone_block_delete", 00:05:35.090 "bdev_zone_block_create", 00:05:35.090 "blobfs_create", 00:05:35.090 "blobfs_detect", 00:05:35.090 "blobfs_set_cache_size", 00:05:35.090 "bdev_xnvme_delete", 00:05:35.090 "bdev_xnvme_create", 00:05:35.090 "bdev_aio_delete", 00:05:35.090 "bdev_aio_rescan", 00:05:35.090 "bdev_aio_create", 00:05:35.090 "bdev_ftl_set_property", 00:05:35.090 "bdev_ftl_get_properties", 00:05:35.090 "bdev_ftl_get_stats", 00:05:35.090 "bdev_ftl_unmap", 00:05:35.090 "bdev_ftl_unload", 00:05:35.090 "bdev_ftl_delete", 00:05:35.090 "bdev_ftl_load", 00:05:35.090 "bdev_ftl_create", 00:05:35.090 "bdev_virtio_attach_controller", 00:05:35.090 "bdev_virtio_scsi_get_devices", 00:05:35.090 "bdev_virtio_detach_controller", 00:05:35.090 "bdev_virtio_blk_set_hotplug", 00:05:35.090 "bdev_iscsi_delete", 00:05:35.090 "bdev_iscsi_create", 00:05:35.090 "bdev_iscsi_set_options", 00:05:35.090 "accel_error_inject_error", 00:05:35.090 "ioat_scan_accel_module", 00:05:35.090 "dsa_scan_accel_module", 00:05:35.090 "iaa_scan_accel_module", 00:05:35.090 "keyring_file_remove_key", 00:05:35.090 "keyring_file_add_key", 00:05:35.090 "keyring_linux_set_options", 00:05:35.090 "iscsi_get_histogram", 00:05:35.090 "iscsi_enable_histogram", 00:05:35.090 "iscsi_set_options", 00:05:35.090 "iscsi_get_auth_groups", 00:05:35.090 "iscsi_auth_group_remove_secret", 00:05:35.090 "iscsi_auth_group_add_secret", 00:05:35.090 "iscsi_delete_auth_group", 00:05:35.090 "iscsi_create_auth_group", 00:05:35.090 "iscsi_set_discovery_auth", 00:05:35.090 "iscsi_get_options", 00:05:35.090 "iscsi_target_node_request_logout", 00:05:35.090 "iscsi_target_node_set_redirect", 00:05:35.091 "iscsi_target_node_set_auth", 00:05:35.091 "iscsi_target_node_add_lun", 00:05:35.091 "iscsi_get_stats", 00:05:35.091 "iscsi_get_connections", 00:05:35.091 "iscsi_portal_group_set_auth", 00:05:35.091 "iscsi_start_portal_group", 00:05:35.091 "iscsi_delete_portal_group", 00:05:35.091 "iscsi_create_portal_group", 00:05:35.091 "iscsi_get_portal_groups", 00:05:35.091 "iscsi_delete_target_node", 00:05:35.091 "iscsi_target_node_remove_pg_ig_maps", 00:05:35.091 "iscsi_target_node_add_pg_ig_maps", 00:05:35.091 "iscsi_create_target_node", 00:05:35.091 "iscsi_get_target_nodes", 00:05:35.091 "iscsi_delete_initiator_group", 00:05:35.091 "iscsi_initiator_group_remove_initiators", 00:05:35.091 "iscsi_initiator_group_add_initiators", 00:05:35.091 "iscsi_create_initiator_group", 00:05:35.091 "iscsi_get_initiator_groups", 00:05:35.091 "nvmf_set_crdt", 00:05:35.091 "nvmf_set_config", 00:05:35.091 "nvmf_set_max_subsystems", 00:05:35.091 "nvmf_stop_mdns_prr", 00:05:35.091 "nvmf_publish_mdns_prr", 00:05:35.091 "nvmf_subsystem_get_listeners", 00:05:35.091 "nvmf_subsystem_get_qpairs", 00:05:35.091 "nvmf_subsystem_get_controllers", 00:05:35.091 "nvmf_get_stats", 00:05:35.091 "nvmf_get_transports", 00:05:35.091 "nvmf_create_transport", 00:05:35.091 "nvmf_get_targets", 00:05:35.091 "nvmf_delete_target", 00:05:35.091 "nvmf_create_target", 00:05:35.091 
"nvmf_subsystem_allow_any_host", 00:05:35.091 "nvmf_subsystem_remove_host", 00:05:35.091 "nvmf_subsystem_add_host", 00:05:35.091 "nvmf_ns_remove_host", 00:05:35.091 "nvmf_ns_add_host", 00:05:35.091 "nvmf_subsystem_remove_ns", 00:05:35.091 "nvmf_subsystem_add_ns", 00:05:35.091 "nvmf_subsystem_listener_set_ana_state", 00:05:35.091 "nvmf_discovery_get_referrals", 00:05:35.091 "nvmf_discovery_remove_referral", 00:05:35.091 "nvmf_discovery_add_referral", 00:05:35.091 "nvmf_subsystem_remove_listener", 00:05:35.091 "nvmf_subsystem_add_listener", 00:05:35.091 "nvmf_delete_subsystem", 00:05:35.091 "nvmf_create_subsystem", 00:05:35.091 "nvmf_get_subsystems", 00:05:35.091 "env_dpdk_get_mem_stats", 00:05:35.091 "nbd_get_disks", 00:05:35.091 "nbd_stop_disk", 00:05:35.091 "nbd_start_disk", 00:05:35.091 "ublk_recover_disk", 00:05:35.091 "ublk_get_disks", 00:05:35.091 "ublk_stop_disk", 00:05:35.091 "ublk_start_disk", 00:05:35.091 "ublk_destroy_target", 00:05:35.091 "ublk_create_target", 00:05:35.091 "virtio_blk_create_transport", 00:05:35.091 "virtio_blk_get_transports", 00:05:35.091 "vhost_controller_set_coalescing", 00:05:35.091 "vhost_get_controllers", 00:05:35.091 "vhost_delete_controller", 00:05:35.091 "vhost_create_blk_controller", 00:05:35.091 "vhost_scsi_controller_remove_target", 00:05:35.091 "vhost_scsi_controller_add_target", 00:05:35.091 "vhost_start_scsi_controller", 00:05:35.091 "vhost_create_scsi_controller", 00:05:35.091 "thread_set_cpumask", 00:05:35.091 "framework_get_governor", 00:05:35.091 "framework_get_scheduler", 00:05:35.091 "framework_set_scheduler", 00:05:35.091 "framework_get_reactors", 00:05:35.091 "thread_get_io_channels", 00:05:35.091 "thread_get_pollers", 00:05:35.091 "thread_get_stats", 00:05:35.091 "framework_monitor_context_switch", 00:05:35.091 "spdk_kill_instance", 00:05:35.091 "log_enable_timestamps", 00:05:35.091 "log_get_flags", 00:05:35.091 "log_clear_flag", 00:05:35.091 "log_set_flag", 00:05:35.091 "log_get_level", 00:05:35.091 "log_set_level", 00:05:35.091 "log_get_print_level", 00:05:35.091 "log_set_print_level", 00:05:35.091 "framework_enable_cpumask_locks", 00:05:35.091 "framework_disable_cpumask_locks", 00:05:35.091 "framework_wait_init", 00:05:35.091 "framework_start_init", 00:05:35.091 "scsi_get_devices", 00:05:35.091 "bdev_get_histogram", 00:05:35.091 "bdev_enable_histogram", 00:05:35.091 "bdev_set_qos_limit", 00:05:35.091 "bdev_set_qd_sampling_period", 00:05:35.091 "bdev_get_bdevs", 00:05:35.091 "bdev_reset_iostat", 00:05:35.091 "bdev_get_iostat", 00:05:35.091 "bdev_examine", 00:05:35.091 "bdev_wait_for_examine", 00:05:35.091 "bdev_set_options", 00:05:35.091 "notify_get_notifications", 00:05:35.091 "notify_get_types", 00:05:35.091 "accel_get_stats", 00:05:35.091 "accel_set_options", 00:05:35.091 "accel_set_driver", 00:05:35.091 "accel_crypto_key_destroy", 00:05:35.091 "accel_crypto_keys_get", 00:05:35.091 "accel_crypto_key_create", 00:05:35.091 "accel_assign_opc", 00:05:35.091 "accel_get_module_info", 00:05:35.091 "accel_get_opc_assignments", 00:05:35.091 "vmd_rescan", 00:05:35.091 "vmd_remove_device", 00:05:35.091 "vmd_enable", 00:05:35.091 "sock_get_default_impl", 00:05:35.091 "sock_set_default_impl", 00:05:35.091 "sock_impl_set_options", 00:05:35.091 "sock_impl_get_options", 00:05:35.091 "iobuf_get_stats", 00:05:35.091 "iobuf_set_options", 00:05:35.091 "framework_get_pci_devices", 00:05:35.091 "framework_get_config", 00:05:35.091 "framework_get_subsystems", 00:05:35.091 "trace_get_info", 00:05:35.091 "trace_get_tpoint_group_mask", 00:05:35.091 
"trace_disable_tpoint_group", 00:05:35.091 "trace_enable_tpoint_group", 00:05:35.091 "trace_clear_tpoint_mask", 00:05:35.091 "trace_set_tpoint_mask", 00:05:35.091 "keyring_get_keys", 00:05:35.091 "spdk_get_version", 00:05:35.091 "rpc_get_methods" 00:05:35.091 ] 00:05:35.091 04:54:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:35.091 04:54:49 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.091 04:54:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.091 04:54:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:35.091 04:54:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 62880 00:05:35.091 04:54:49 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 62880 ']' 00:05:35.091 04:54:49 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 62880 00:05:35.091 04:54:49 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:35.091 04:54:49 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.091 04:54:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62880 00:05:35.350 04:54:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.350 04:54:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.350 killing process with pid 62880 00:05:35.350 04:54:49 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62880' 00:05:35.350 04:54:49 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 62880 00:05:35.350 04:54:49 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 62880 00:05:37.256 00:05:37.256 real 0m3.361s 00:05:37.256 user 0m6.024s 00:05:37.256 sys 0m0.483s 00:05:37.256 04:54:51 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.256 04:54:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.256 ************************************ 00:05:37.256 END TEST spdkcli_tcp 00:05:37.256 ************************************ 00:05:37.256 04:54:51 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:37.256 04:54:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.256 04:54:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.256 04:54:51 -- common/autotest_common.sh@10 -- # set +x 00:05:37.256 ************************************ 00:05:37.256 START TEST dpdk_mem_utility 00:05:37.256 ************************************ 00:05:37.256 04:54:51 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:37.256 * Looking for test storage... 
00:05:37.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:37.256 04:54:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:37.256 04:54:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=62983 00:05:37.256 04:54:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 62983 00:05:37.256 04:54:51 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 62983 ']' 00:05:37.256 04:54:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:37.256 04:54:51 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.256 04:54:51 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.256 04:54:51 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.256 04:54:51 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.256 04:54:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.256 [2024-07-24 04:54:51.854773] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:37.256 [2024-07-24 04:54:51.854983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62983 ] 00:05:37.515 [2024-07-24 04:54:52.027034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.774 [2024-07-24 04:54:52.180610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.344 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.344 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:38.344 04:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:38.344 04:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:38.344 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.344 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.344 { 00:05:38.344 "filename": "/tmp/spdk_mem_dump.txt" 00:05:38.344 } 00:05:38.344 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.344 04:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:38.344 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:38.344 1 heaps totaling size 820.000000 MiB 00:05:38.344 size: 820.000000 MiB heap id: 0 00:05:38.344 end heaps---------- 00:05:38.344 8 mempools totaling size 598.116089 MiB 00:05:38.344 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:38.344 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:38.344 size: 84.521057 MiB name: bdev_io_62983 00:05:38.344 size: 51.011292 MiB name: evtpool_62983 00:05:38.344 size: 50.003479 MiB name: msgpool_62983 00:05:38.344 size: 21.763794 MiB name: PDU_Pool 00:05:38.344 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:05:38.344 size: 0.026123 MiB name: Session_Pool 00:05:38.344 end mempools------- 00:05:38.344 6 memzones totaling size 4.142822 MiB 00:05:38.344 size: 1.000366 MiB name: RG_ring_0_62983 00:05:38.344 size: 1.000366 MiB name: RG_ring_1_62983 00:05:38.344 size: 1.000366 MiB name: RG_ring_4_62983 00:05:38.344 size: 1.000366 MiB name: RG_ring_5_62983 00:05:38.344 size: 0.125366 MiB name: RG_ring_2_62983 00:05:38.344 size: 0.015991 MiB name: RG_ring_3_62983 00:05:38.344 end memzones------- 00:05:38.345 04:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:38.345 heap id: 0 total size: 820.000000 MiB number of busy elements: 301 number of free elements: 18 00:05:38.345 list of free elements. size: 18.451294 MiB 00:05:38.345 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:38.345 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:38.345 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:38.345 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:38.345 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:38.345 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:38.345 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:38.345 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:38.345 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:38.345 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:38.345 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:38.345 element at address: 0x200000200000 with size: 0.829956 MiB 00:05:38.345 element at address: 0x20001b000000 with size: 0.564148 MiB 00:05:38.345 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:38.345 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:38.345 element at address: 0x200013800000 with size: 0.467651 MiB 00:05:38.345 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:38.345 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:38.345 list of standard malloc elements. 
size: 199.284302 MiB 00:05:38.345 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:38.345 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:38.345 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:38.345 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:38.345 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:38.345 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:38.345 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:38.345 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:38.345 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:38.345 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:38.345 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:38.345 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:05:38.345 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:38.345 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200013877b80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:38.345 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:38.345 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:38.346 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:38.346 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:38.346 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:38.346 element at address: 0x20001b090dc0 
with size: 0.000244 MiB
00:05:38.346 element at address: 0x20001b090ec0 with size: 0.000244 MiB
00:05:38.346 (... 69 further elements, 0x20001b090fc0 through 0x20001b0953c0 at 0x100 intervals, each with size: 0.000244 MiB ...)
00:05:38.346 element at address: 0x200028463f40 with size: 0.000244 MiB
00:05:38.346 element at address: 0x200028464040 with size: 0.000244 MiB
00:05:38.346 element at address: 0x20002846ad00 with size: 0.000244 MiB
00:05:38.346 (... 80 further elements, 0x20002846af80 through 0x20002846fe80 at 0x100 intervals, each with size: 0.000244 MiB ...)
00:05:38.347 list of memzone associated elements. size: 602.264404 MiB
00:05:38.347 element at address: 0x20001b0954c0 with size: 211.416809 MiB
00:05:38.347 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:38.347 element at address: 0x20002846ff80 with size: 157.562622 MiB
00:05:38.347 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:38.347 element at address: 0x2000139fab40 with size: 84.020691 MiB
00:05:38.347 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_62983_0
00:05:38.347 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:05:38.347 associated memzone info: size: 48.002930 MiB name: MP_evtpool_62983_0
00:05:38.347 element at address: 0x200003fff340 with size: 48.003113 MiB
00:05:38.347 associated memzone info: size: 48.002930 MiB name: MP_msgpool_62983_0
00:05:38.347 element at address: 0x200019bbe900 with size: 20.255615 MiB
00:05:38.347 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:38.347 element at address: 0x2000323feb00 with size: 18.005127 MiB
00:05:38.347 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:38.347 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:05:38.347 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_62983
00:05:38.347 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:05:38.347 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_62983
00:05:38.347 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:38.347 associated memzone info: size: 1.007996 MiB name: MP_evtpool_62983
00:05:38.347 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:05:38.347 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:38.347 element at address: 0x200019abc780 with size: 1.008179 MiB
00:05:38.347 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:38.347 element at address: 0x200018efde00 with size: 1.008179 MiB
00:05:38.347 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:38.347 element at address: 0x2000138f89c0 with size: 1.008179 MiB
00:05:38.347 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:38.347 element at address: 0x200003eff100 with size: 1.000549 MiB
00:05:38.347 associated memzone info: size: 1.000366 MiB name: RG_ring_0_62983
00:05:38.347 element at address: 0x200003affb80 with size: 1.000549 MiB
00:05:38.347 associated memzone info: size: 1.000366 MiB name: RG_ring_1_62983
00:05:38.347 element at address: 0x2000196ffd40 with size: 1.000549 MiB
00:05:38.347 associated memzone info: size: 1.000366 MiB name: RG_ring_4_62983
00:05:38.347 element at address: 0x2000322fe8c0 with size: 1.000549 MiB
00:05:38.347 associated memzone info: size: 1.000366 MiB name: RG_ring_5_62983
00:05:38.347 element at address: 0x200003a5b2c0 with size: 0.500549 MiB
00:05:38.347 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_62983
00:05:38.347 element at address: 0x20001927dac0 with size: 0.500549 MiB
00:05:38.347 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:38.347 element at address: 0x200013878680 with size: 0.500549 MiB
00:05:38.347 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:38.347 element at address: 0x200019a7c440 with size: 0.250549 MiB
00:05:38.347 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:38.347 element at address: 0x200003adf740 with size: 0.125549 MiB
00:05:38.347 associated memzone info: size: 0.125366 MiB name: RG_ring_2_62983
00:05:38.347 element at address: 0x200018ef5ac0 with size: 0.031799 MiB
00:05:38.347 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:38.347 element at address: 0x200028464140 with size: 0.023804 MiB
00:05:38.347 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:38.347 element at address: 0x200003adb500 with size: 0.016174 MiB
00:05:38.347 associated memzone info: size: 0.015991 MiB name: RG_ring_3_62983
00:05:38.347 element at address: 0x20002846a2c0 with size: 0.002502 MiB
00:05:38.347 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:38.347 element at address: 0x2000002d5f80 with size: 0.000366 MiB
00:05:38.347 associated memzone info: size: 0.000183 MiB name: MP_msgpool_62983
00:05:38.347 element at address: 0x2000137ffd80 with size: 0.000366 MiB
00:05:38.347 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_62983
00:05:38.347 element at address: 0x20002846ae00 with size: 0.000366 MiB
00:05:38.347 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:38.347 04:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:38.347 04:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 62983
00:05:38.347 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 62983 ']'
00:05:38.347 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 62983
00:05:38.347 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname
00:05:38.347 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:38.347 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62983
00:05:38.347 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:38.347 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
killing process with pid 62983
04:54:52 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62983'
04:54:52 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 62983
04:54:52 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 62983
00:05:40.257
00:05:40.257 real 0m2.986s
00:05:40.257 user 0m3.086s
00:05:40.257 sys 0m0.434s
00:05:40.257 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:40.257 ************************************
00:05:40.257 END TEST dpdk_mem_utility
00:05:40.257 ************************************
00:05:40.257 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:40.257 04:54:54 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:40.257 04:54:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:40.257 04:54:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:40.257 04:54:54 -- common/autotest_common.sh@10 -- # set +x
00:05:40.257
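The dpdk_mem_utility test above drives SPDK's DPDK memory introspection over JSON-RPC and then inspects the resulting dump. A minimal sketch of the same pattern, assuming a target app already listening on the default RPC socket; the env_dpdk_get_mem_stats call and the jq/awk post-processing are illustrative of this harness, not a definitive recipe, and the dump path the RPC reports may differ per setup:

# Ask the running SPDK app to write its DPDK memory info to a file,
# then collapse the per-element listing into per-size counts instead
# of hundreds of identical "element at address ..." lines.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
dump=$("$rpc_py" env_dpdk_get_mem_stats | jq -r '.filename')
grep 'element at address' "$dump" \
  | awk '{print $(NF-1), $NF}' \
  | sort | uniq -c | sort -rn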
************************************ 00:05:40.257 START TEST event 00:05:40.257 ************************************ 00:05:40.257 04:54:54 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:40.257 * Looking for test storage... 00:05:40.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:40.257 04:54:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:40.257 04:54:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:40.257 04:54:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:40.257 04:54:54 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:40.257 04:54:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.257 04:54:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.257 ************************************ 00:05:40.257 START TEST event_perf 00:05:40.257 ************************************ 00:05:40.257 04:54:54 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:40.257 Running I/O for 1 seconds...[2024-07-24 04:54:54.831747] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:40.257 [2024-07-24 04:54:54.831944] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63072 ] 00:05:40.516 [2024-07-24 04:54:55.004526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.776 [2024-07-24 04:54:55.175644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.776 Running I/O for 1 seconds...[2024-07-24 04:54:55.175809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.776 [2024-07-24 04:54:55.175951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.776 [2024-07-24 04:54:55.176141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.153 00:05:42.153 lcore 0: 199015 00:05:42.153 lcore 1: 199013 00:05:42.153 lcore 2: 199014 00:05:42.153 lcore 3: 199016 00:05:42.153 done. 00:05:42.153 00:05:42.153 real 0m1.729s 00:05:42.153 user 0m4.494s 00:05:42.153 sys 0m0.111s 00:05:42.153 ************************************ 00:05:42.153 END TEST event_perf 00:05:42.153 ************************************ 00:05:42.153 04:54:56 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.153 04:54:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.153 04:54:56 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:42.153 04:54:56 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:42.153 04:54:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.153 04:54:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.153 ************************************ 00:05:42.153 START TEST event_reactor 00:05:42.153 ************************************ 00:05:42.153 04:54:56 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:42.153 [2024-07-24 04:54:56.603823] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:05:42.153 [2024-07-24 04:54:56.604003] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63117 ] 00:05:42.153 [2024-07-24 04:54:56.758050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.413 [2024-07-24 04:54:56.915027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.790 test_start 00:05:43.790 oneshot 00:05:43.790 tick 100 00:05:43.790 tick 100 00:05:43.790 tick 250 00:05:43.790 tick 100 00:05:43.790 tick 100 00:05:43.790 tick 250 00:05:43.790 tick 500 00:05:43.790 tick 100 00:05:43.790 tick 100 00:05:43.790 tick 100 00:05:43.790 tick 250 00:05:43.790 tick 100 00:05:43.790 tick 100 00:05:43.790 test_end 00:05:43.790 00:05:43.790 real 0m1.683s 00:05:43.790 user 0m1.497s 00:05:43.790 sys 0m0.078s 00:05:43.790 ************************************ 00:05:43.790 END TEST event_reactor 00:05:43.790 ************************************ 00:05:43.790 04:54:58 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.790 04:54:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:43.790 04:54:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.790 04:54:58 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:43.790 04:54:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.790 04:54:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.790 ************************************ 00:05:43.790 START TEST event_reactor_perf 00:05:43.790 ************************************ 00:05:43.790 04:54:58 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.790 [2024-07-24 04:54:58.344816] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:05:43.790 [2024-07-24 04:54:58.344993] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63154 ] 00:05:44.050 [2024-07-24 04:54:58.519394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.050 [2024-07-24 04:54:58.677636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.427 test_start 00:05:45.427 test_end 00:05:45.427 Performance: 335204 events per second 00:05:45.427 ************************************ 00:05:45.427 END TEST event_reactor_perf 00:05:45.427 ************************************ 00:05:45.427 00:05:45.427 real 0m1.702s 00:05:45.427 user 0m1.495s 00:05:45.427 sys 0m0.097s 00:05:45.427 04:55:00 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.427 04:55:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.427 04:55:00 event -- event/event.sh@49 -- # uname -s 00:05:45.427 04:55:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:45.427 04:55:00 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:45.427 04:55:00 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.427 04:55:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.427 04:55:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.686 ************************************ 00:05:45.686 START TEST event_scheduler 00:05:45.686 ************************************ 00:05:45.686 04:55:00 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:45.686 * Looking for test storage... 00:05:45.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:45.686 04:55:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:45.686 04:55:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63217 00:05:45.686 04:55:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.686 04:55:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:45.686 04:55:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63217 00:05:45.686 04:55:00 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 63217 ']' 00:05:45.686 04:55:00 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.686 04:55:00 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.686 04:55:00 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.686 04:55:00 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.686 04:55:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.686 [2024-07-24 04:55:00.236142] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:05:45.687 [2024-07-24 04:55:00.236343] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63217 ] 00:05:45.946 [2024-07-24 04:55:00.410611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.205 [2024-07-24 04:55:00.640833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.205 [2024-07-24 04:55:00.640981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.205 [2024-07-24 04:55:00.641098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.205 [2024-07-24 04:55:00.641276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.772 04:55:01 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.772 04:55:01 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:46.772 04:55:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:46.772 04:55:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.772 04:55:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.772 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:46.772 POWER: Cannot set governor of lcore 0 to userspace 00:05:46.772 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:46.772 POWER: Cannot set governor of lcore 0 to performance 00:05:46.772 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:46.772 POWER: Cannot set governor of lcore 0 to userspace 00:05:46.772 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:46.772 POWER: Cannot set governor of lcore 0 to userspace 00:05:46.772 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:46.772 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:46.772 POWER: Unable to set Power Management Environment for lcore 0 00:05:46.772 [2024-07-24 04:55:01.151641] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:46.772 [2024-07-24 04:55:01.151675] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:46.772 [2024-07-24 04:55:01.151691] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:46.772 [2024-07-24 04:55:01.151714] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:46.772 [2024-07-24 04:55:01.151729] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:46.772 [2024-07-24 04:55:01.151740] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:46.772 04:55:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.772 04:55:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:46.772 04:55:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.772 04:55:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.772 [2024-07-24 04:55:01.389814] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
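The startup traced above follows a fixed order: the scheduler app is launched with --wait-for-rpc so subsystem init is held off, the dynamic scheduler is selected, and only then is initialization kicked. A condensed sketch of that sequence (binary path, masks, and flags copied from the trace; error handling omitted). On hosts without writable cpufreq governors the DPDK governor fails exactly as logged and the dynamic scheduler proceeds without it:

app=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$app" -m 0xF -p 0x2 --wait-for-rpc -f &   # hold subsystem init until told
scheduler_pid=$!
"$rpc_py" framework_set_scheduler dynamic  # must precede framework_start_init
"$rpc_py" framework_start_init             # reactors now run under the new scheduler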
00:05:46.772 04:55:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.772 04:55:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:46.772 04:55:01 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.772 04:55:01 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.772 04:55:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.031 ************************************ 00:05:47.031 START TEST scheduler_create_thread 00:05:47.031 ************************************ 00:05:47.031 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:47.031 04:55:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:47.031 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.031 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.031 2 00:05:47.031 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.031 04:55:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:47.031 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.032 3 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.032 4 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.032 5 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.032 6 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.032 7 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.032 8 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.032 9 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.032 10 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.032 04:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.968 04:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.968 04:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:47.969 04:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:47.969 04:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.969 04:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.346 ************************************ 00:05:49.346 END TEST scheduler_create_thread 00:05:49.346 ************************************ 00:05:49.346 04:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.346 00:05:49.346 real 0m2.140s 00:05:49.346 user 0m0.018s 00:05:49.346 sys 0m0.007s 00:05:49.346 04:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.346 04:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.346 04:55:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:49.346 04:55:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63217 00:05:49.346 04:55:03 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 63217 ']' 00:05:49.346 04:55:03 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 63217 00:05:49.346 04:55:03 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:49.346 04:55:03 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.346 04:55:03 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63217 00:05:49.346 killing process with pid 63217 00:05:49.346 04:55:03 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:49.346 04:55:03 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:49.346 04:55:03 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63217' 00:05:49.346 04:55:03 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 63217 00:05:49.346 04:55:03 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 63217 00:05:49.604 [2024-07-24 04:55:04.022756] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
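The create/activate/delete cycle just traced comes entirely from the test's RPC plugin; reduced to plain rpc.py calls it looks like the sketch below. Thread names, masks, and activity values are copied from the trace; the PYTHONPATH line is an assumption about where scheduler_plugin is found:

export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/event/scheduler  # assumed plugin location
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Pinned threads: one fully busy on core 0, one idle on core 1.
"$rpc_py" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
"$rpc_py" --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
# Unpinned threads at partial load; set_active retunes one in place.
"$rpc_py" --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
tid=$("$rpc_py" --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
"$rpc_py" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
# Threads can also be torn down through the plugin.
tid=$("$rpc_py" --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
"$rpc_py" --plugin scheduler_plugin scheduler_thread_delete "$tid"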
00:05:50.537 00:05:50.537 real 0m4.966s 00:05:50.537 user 0m8.190s 00:05:50.537 sys 0m0.418s 00:05:50.537 04:55:05 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.537 ************************************ 00:05:50.537 END TEST event_scheduler 00:05:50.537 ************************************ 00:05:50.537 04:55:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.537 04:55:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:50.537 04:55:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:50.537 04:55:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.537 04:55:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.537 04:55:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.537 ************************************ 00:05:50.537 START TEST app_repeat 00:05:50.537 ************************************ 00:05:50.537 04:55:05 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:50.537 04:55:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.537 04:55:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.537 04:55:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:50.537 04:55:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.537 04:55:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:50.537 04:55:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:50.537 04:55:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:50.537 04:55:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63322 00:05:50.537 04:55:05 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:50.537 04:55:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.537 04:55:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63322' 00:05:50.537 Process app_repeat pid: 63322 00:05:50.537 04:55:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.537 spdk_app_start Round 0 00:05:50.537 04:55:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:50.537 04:55:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63322 /var/tmp/spdk-nbd.sock 00:05:50.537 04:55:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63322 ']' 00:05:50.537 04:55:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.537 04:55:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.537 04:55:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.537 04:55:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.537 04:55:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.537 [2024-07-24 04:55:05.145500] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:05:50.537 [2024-07-24 04:55:05.145714] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63322 ] 00:05:50.795 [2024-07-24 04:55:05.317502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.053 [2024-07-24 04:55:05.472625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.053 [2024-07-24 04:55:05.472634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.618 04:55:06 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.618 04:55:06 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:51.618 04:55:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.876 Malloc0 00:05:51.876 04:55:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.134 Malloc1 00:05:52.134 04:55:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.134 04:55:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.393 /dev/nbd0 00:05:52.393 04:55:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.393 04:55:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.393 04:55:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:52.393 04:55:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:52.393 04:55:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.393 04:55:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.393 04:55:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:52.393 04:55:06 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:05:52.393 04:55:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.393 04:55:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.393 04:55:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.393 1+0 records in 00:05:52.393 1+0 records out 00:05:52.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269958 s, 15.2 MB/s 00:05:52.393 04:55:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.393 04:55:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:52.393 04:55:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.393 04:55:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.393 04:55:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:52.393 04:55:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.393 04:55:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.393 04:55:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.651 /dev/nbd1 00:05:52.651 04:55:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.651 04:55:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.651 04:55:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:52.651 04:55:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:52.651 04:55:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.651 04:55:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.651 04:55:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:52.651 04:55:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:52.651 04:55:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.651 04:55:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.651 04:55:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.651 1+0 records in 00:05:52.651 1+0 records out 00:05:52.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375637 s, 10.9 MB/s 00:05:52.651 04:55:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.652 04:55:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:52.652 04:55:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.652 04:55:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.652 04:55:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:52.652 04:55:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.652 04:55:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.652 04:55:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.652 04:55:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:05:52.652 04:55:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.909 { 00:05:52.909 "nbd_device": "/dev/nbd0", 00:05:52.909 "bdev_name": "Malloc0" 00:05:52.909 }, 00:05:52.909 { 00:05:52.909 "nbd_device": "/dev/nbd1", 00:05:52.909 "bdev_name": "Malloc1" 00:05:52.909 } 00:05:52.909 ]' 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.909 { 00:05:52.909 "nbd_device": "/dev/nbd0", 00:05:52.909 "bdev_name": "Malloc0" 00:05:52.909 }, 00:05:52.909 { 00:05:52.909 "nbd_device": "/dev/nbd1", 00:05:52.909 "bdev_name": "Malloc1" 00:05:52.909 } 00:05:52.909 ]' 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.909 /dev/nbd1' 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.909 /dev/nbd1' 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.909 256+0 records in 00:05:52.909 256+0 records out 00:05:52.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104944 s, 99.9 MB/s 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.909 04:55:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.910 256+0 records in 00:05:52.910 256+0 records out 00:05:52.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274231 s, 38.2 MB/s 00:05:52.910 04:55:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.910 04:55:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.168 256+0 records in 00:05:53.168 256+0 records out 00:05:53.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309897 s, 33.8 MB/s 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.168 04:55:07 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.168 04:55:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.426 04:55:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.426 04:55:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.426 04:55:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.426 04:55:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.426 04:55:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.426 04:55:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.426 04:55:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.426 04:55:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.426 04:55:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.426 04:55:08 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.426 04:55:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.684 04:55:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.684 04:55:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.684 04:55:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.942 04:55:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.942 04:55:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.942 04:55:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.942 04:55:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.942 04:55:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.942 04:55:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.942 04:55:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.942 04:55:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.942 04:55:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.942 04:55:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.201 04:55:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.136 [2024-07-24 04:55:09.703172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.395 [2024-07-24 04:55:09.847313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.395 [2024-07-24 04:55:09.847318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.395 [2024-07-24 04:55:09.994774] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.395 [2024-07-24 04:55:09.994906] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.296 spdk_app_start Round 1 00:05:57.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.296 04:55:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.296 04:55:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:57.296 04:55:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63322 /var/tmp/spdk-nbd.sock 00:05:57.296 04:55:11 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63322 ']' 00:05:57.296 04:55:11 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.296 04:55:11 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.296 04:55:11 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
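Round 1 now repeats the NBD round-trip that Round 0 just completed: create malloc bdevs, export them as kernel block devices, write random data through the device node, and byte-compare it against the source file. A stripped-down sketch of one such round, with the socket path, bdev geometry, and cmp window taken from the trace (teardown of only one disk shown):

rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc_py bdev_malloc_create 64 4096            # 64 MiB bdev, 4096-byte blocks -> Malloc0
$rpc_py nbd_start_disk Malloc0 /dev/nbd0      # expose the bdev as /dev/nbd0
tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=4096 count=256
dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M "$tmp" /dev/nbd0                 # verify the write reads back intact
$rpc_py nbd_stop_disk /dev/nbd0
rm -f "$tmp"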
00:05:57.296 04:55:11 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.296 04:55:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.554 04:55:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.554 04:55:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:57.554 04:55:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.813 Malloc0 00:05:57.813 04:55:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.071 Malloc1 00:05:58.071 04:55:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.071 04:55:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.330 /dev/nbd0 00:05:58.330 04:55:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.330 04:55:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.330 04:55:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:58.330 04:55:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:58.330 04:55:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.330 04:55:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.330 04:55:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:58.330 04:55:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:58.330 04:55:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.330 04:55:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.330 04:55:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.330 1+0 records in 00:05:58.330 1+0 records out 
00:05:58.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196138 s, 20.9 MB/s 00:05:58.330 04:55:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.330 04:55:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:58.330 04:55:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.330 04:55:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.330 04:55:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:58.330 04:55:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.330 04:55:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.330 04:55:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.330 /dev/nbd1 00:05:58.589 04:55:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.589 04:55:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.589 04:55:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:58.589 04:55:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:58.589 04:55:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.589 04:55:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.589 04:55:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:58.589 04:55:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:58.589 04:55:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.589 04:55:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.589 04:55:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.589 1+0 records in 00:05:58.589 1+0 records out 00:05:58.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296914 s, 13.8 MB/s 00:05:58.589 04:55:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.589 04:55:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:58.589 04:55:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.589 04:55:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.589 04:55:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:58.589 04:55:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.589 04:55:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.589 04:55:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.589 04:55:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.589 04:55:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.589 04:55:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.589 { 00:05:58.589 "nbd_device": "/dev/nbd0", 00:05:58.589 "bdev_name": "Malloc0" 00:05:58.589 }, 00:05:58.589 { 00:05:58.589 "nbd_device": "/dev/nbd1", 00:05:58.589 "bdev_name": "Malloc1" 00:05:58.589 } 
00:05:58.589 ]' 00:05:58.589 04:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.589 { 00:05:58.589 "nbd_device": "/dev/nbd0", 00:05:58.589 "bdev_name": "Malloc0" 00:05:58.589 }, 00:05:58.589 { 00:05:58.589 "nbd_device": "/dev/nbd1", 00:05:58.589 "bdev_name": "Malloc1" 00:05:58.589 } 00:05:58.589 ]' 00:05:58.589 04:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.850 /dev/nbd1' 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.850 /dev/nbd1' 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.850 256+0 records in 00:05:58.850 256+0 records out 00:05:58.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104622 s, 100 MB/s 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.850 256+0 records in 00:05:58.850 256+0 records out 00:05:58.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025315 s, 41.4 MB/s 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.850 256+0 records in 00:05:58.850 256+0 records out 00:05:58.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0324335 s, 32.3 MB/s 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.850 04:55:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.851 04:55:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.851 04:55:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.851 04:55:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.851 04:55:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.851 04:55:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.851 04:55:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.851 04:55:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.109 04:55:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.109 04:55:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.109 04:55:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.109 04:55:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.109 04:55:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.109 04:55:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.109 04:55:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.109 04:55:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.109 04:55:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.109 04:55:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.378 04:55:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.378 04:55:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.378 04:55:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.378 04:55:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.378 04:55:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.378 04:55:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.378 04:55:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.378 04:55:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.378 04:55:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.378 04:55:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.378 04:55:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.650 04:55:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.650 04:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.650 04:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:59.650 04:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.650 04:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.650 04:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.650 04:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.650 04:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.650 04:55:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.650 04:55:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.650 04:55:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.650 04:55:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.650 04:55:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.909 04:55:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.283 [2024-07-24 04:55:15.487561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.283 [2024-07-24 04:55:15.630159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.283 [2024-07-24 04:55:15.630159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.283 [2024-07-24 04:55:15.775382] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.283 [2024-07-24 04:55:15.775473] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.185 spdk_app_start Round 2 00:06:03.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.185 04:55:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:03.185 04:55:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:03.185 04:55:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63322 /var/tmp/spdk-nbd.sock 00:06:03.185 04:55:17 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63322 ']' 00:06:03.185 04:55:17 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.185 04:55:17 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.185 04:55:17 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
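Each round's data path is the same: fill a temp file with random data, dd it onto both nbd devices with O_DIRECT, then byte-compare each device against the file. A condensed sketch of the write/verify pass traced above, with the file name, sizes, and flags taken from this log; nbd_dd_data_verify in bdev/nbd_common.sh is the real implementation:

  #!/usr/bin/env bash
  # Write 1 MiB of random data through each nbd device and verify it (sketch).
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 256 x 4 KiB = 1 MiB
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # bypass the page cache
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                             # nonzero exit on any mismatch
  done
  rm "$tmp"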
00:06:03.185 04:55:17 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.185 04:55:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.185 04:55:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.185 04:55:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:03.185 04:55:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.444 Malloc0 00:06:03.444 04:55:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.012 Malloc1 00:06:04.012 04:55:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.012 /dev/nbd0 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.012 04:55:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.012 04:55:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:04.012 04:55:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.012 04:55:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.012 04:55:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.012 04:55:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:04.012 04:55:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.012 04:55:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.012 04:55:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.012 04:55:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.271 1+0 records in 00:06:04.271 1+0 records out 
00:06:04.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552679 s, 7.4 MB/s 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.271 04:55:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.271 04:55:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.271 04:55:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.271 /dev/nbd1 00:06:04.271 04:55:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.271 04:55:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.271 1+0 records in 00:06:04.271 1+0 records out 00:06:04.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371841 s, 11.0 MB/s 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.271 04:55:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.271 04:55:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.271 04:55:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.271 04:55:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.271 04:55:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.271 04:55:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.530 04:55:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.530 { 00:06:04.530 "nbd_device": "/dev/nbd0", 00:06:04.530 "bdev_name": "Malloc0" 00:06:04.530 }, 00:06:04.530 { 00:06:04.530 "nbd_device": "/dev/nbd1", 00:06:04.530 "bdev_name": "Malloc1" 00:06:04.530 } 
00:06:04.530 ]' 00:06:04.530 04:55:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.530 { 00:06:04.530 "nbd_device": "/dev/nbd0", 00:06:04.530 "bdev_name": "Malloc0" 00:06:04.530 }, 00:06:04.530 { 00:06:04.530 "nbd_device": "/dev/nbd1", 00:06:04.530 "bdev_name": "Malloc1" 00:06:04.530 } 00:06:04.530 ]' 00:06:04.530 04:55:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.530 04:55:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.530 /dev/nbd1' 00:06:04.789 04:55:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.790 /dev/nbd1' 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.790 256+0 records in 00:06:04.790 256+0 records out 00:06:04.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104383 s, 100 MB/s 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.790 256+0 records in 00:06:04.790 256+0 records out 00:06:04.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284503 s, 36.9 MB/s 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.790 256+0 records in 00:06:04.790 256+0 records out 00:06:04.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307689 s, 34.1 MB/s 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.790 04:55:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.049 04:55:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.049 04:55:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.049 04:55:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.049 04:55:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.049 04:55:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.049 04:55:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.049 04:55:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.049 04:55:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.049 04:55:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.049 04:55:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.308 04:55:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.308 04:55:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.308 04:55:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.308 04:55:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.308 04:55:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.308 04:55:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.308 04:55:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.308 04:55:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.308 04:55:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.308 04:55:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.308 04:55:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.566 04:55:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.566 04:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.566 04:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:05.566 04:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.566 04:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.566 04:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.566 04:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.566 04:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.566 04:55:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.566 04:55:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.566 04:55:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.566 04:55:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.566 04:55:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.132 04:55:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.066 [2024-07-24 04:55:21.543553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.067 [2024-07-24 04:55:21.692651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.067 [2024-07-24 04:55:21.692674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.325 [2024-07-24 04:55:21.834653] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.325 [2024-07-24 04:55:21.834753] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.228 04:55:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63322 /var/tmp/spdk-nbd.sock 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63322 ']' 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
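Round 2 ends the same way, and the Round 3 startup above is the final pass: the app is restarted three times in total. The loop in test/event/event.sh drives this by asking the running app to kill itself over RPC and then waiting for it to come back. A sketch of that shape, with the pid and socket from this run; waitforlisten is the polling helper from test/common/autotest_common.sh:

  #!/usr/bin/env bash
  # Restart-the-app loop behind app_repeat (sketch of test/event/event.sh).
  source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh  # waitforlisten
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  app_pid=63322   # pid of the repeat app in this run

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$app_pid" "$sock"              # poll until the RPC socket is back
      # ... create Malloc0/Malloc1, attach /dev/nbd0-1, write/verify, detach ...
      "$rpc" -s "$sock" spdk_kill_instance SIGTERM  # app catches SIGTERM, stops the iteration
      sleep 3
  done
  waitforlisten "$app_pid" "$sock"                  # Round 3: last pass, then killprocess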
00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:09.228 04:55:23 event.app_repeat -- event/event.sh@39 -- # killprocess 63322 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 63322 ']' 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 63322 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63322 00:06:09.228 killing process with pid 63322 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63322' 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@967 -- # kill 63322 00:06:09.228 04:55:23 event.app_repeat -- common/autotest_common.sh@972 -- # wait 63322 00:06:10.163 spdk_app_start is called in Round 0. 00:06:10.163 Shutdown signal received, stop current app iteration 00:06:10.163 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 reinitialization... 00:06:10.163 spdk_app_start is called in Round 1. 00:06:10.163 Shutdown signal received, stop current app iteration 00:06:10.163 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 reinitialization... 00:06:10.163 spdk_app_start is called in Round 2. 00:06:10.163 Shutdown signal received, stop current app iteration 00:06:10.163 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 reinitialization... 00:06:10.163 spdk_app_start is called in Round 3. 00:06:10.163 Shutdown signal received, stop current app iteration 00:06:10.164 04:55:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:10.164 04:55:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:10.164 00:06:10.164 real 0m19.672s 00:06:10.164 user 0m42.503s 00:06:10.164 sys 0m2.486s 00:06:10.164 04:55:24 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.164 04:55:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.164 ************************************ 00:06:10.164 END TEST app_repeat 00:06:10.164 ************************************ 00:06:10.422 04:55:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:10.422 04:55:24 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:10.422 04:55:24 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.422 04:55:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.422 04:55:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.422 ************************************ 00:06:10.422 START TEST cpu_locks 00:06:10.422 ************************************ 00:06:10.422 04:55:24 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:10.422 * Looking for test storage... 
00:06:10.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:10.422 04:55:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:10.422 04:55:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:10.422 04:55:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:10.422 04:55:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:10.422 04:55:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.422 04:55:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.422 04:55:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.422 ************************************ 00:06:10.422 START TEST default_locks 00:06:10.422 ************************************ 00:06:10.422 04:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:10.422 04:55:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63769 00:06:10.422 04:55:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 63769 00:06:10.422 04:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 63769 ']' 00:06:10.422 04:55:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.422 04:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.422 04:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.422 04:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.422 04:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.422 04:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.422 [2024-07-24 04:55:25.024376] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:06:10.422 [2024-07-24 04:55:25.024568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63769 ] 00:06:10.681 [2024-07-24 04:55:25.193847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.940 [2024-07-24 04:55:25.350643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.507 04:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.507 04:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:11.507 04:55:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 63769 00:06:11.507 04:55:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 63769 00:06:11.507 04:55:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.073 04:55:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 63769 00:06:12.073 04:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 63769 ']' 00:06:12.073 04:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 63769 00:06:12.073 04:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:12.073 04:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.073 04:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63769 00:06:12.073 04:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.073 04:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.073 killing process with pid 63769 00:06:12.073 04:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63769' 00:06:12.073 04:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 63769 00:06:12.073 04:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 63769 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63769 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63769 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 63769 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 63769 ']' 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.975 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.975 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63769) - No such process 00:06:13.975 ERROR: process (pid: 63769) is no longer running 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:13.975 00:06:13.975 real 0m3.335s 00:06:13.975 user 0m3.458s 00:06:13.975 sys 0m0.594s 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.975 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.975 ************************************ 00:06:13.975 END TEST default_locks 00:06:13.975 ************************************ 00:06:13.975 04:55:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:13.975 04:55:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.975 04:55:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.975 04:55:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.975 ************************************ 00:06:13.975 START TEST default_locks_via_rpc 00:06:13.975 ************************************ 00:06:13.975 04:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:13.975 04:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63833 00:06:13.975 04:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.976 04:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 63833 00:06:13.976 04:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63833 ']' 00:06:13.976 04:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.976 04:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
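The default_locks test that finished above follows a simple pattern: start spdk_tgt pinned to core 0, prove the core lock file exists, kill the target, and prove a dead pid can no longer be waited on (the 'No such process' and es=1 lines are the expected failure path, inverted by the NOT helper). A sketch with the pid and paths from this run; killprocess, NOT, and waitforlisten come from test/common/autotest_common.sh, and the real locks_exist lives in event/cpu_locks.sh:

  #!/usr/bin/env bash
  # Core-lock presence check and negative waitforlisten (sketch of default_locks).
  source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh  # killprocess, NOT, waitforlisten

  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock   # the target holds spdk_cpu_lock_* files
  }

  locks_exist 63769 || echo 'expected core 0 to be locked' >&2
  killprocess 63769
  NOT waitforlisten 63769 /var/tmp/spdk.sock      # must fail once the target is gone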
00:06:13.976 04:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.976 04:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.976 04:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.976 [2024-07-24 04:55:28.412032] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:13.976 [2024-07-24 04:55:28.412238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63833 ] 00:06:13.976 [2024-07-24 04:55:28.585671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.234 [2024-07-24 04:55:28.745175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 63833 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 63833 00:06:14.800 04:55:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.059 04:55:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 63833 00:06:15.059 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 63833 ']' 00:06:15.059 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 63833 00:06:15.059 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:15.317 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.317 04:55:29 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63833 00:06:15.317 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.317 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.317 killing process with pid 63833 00:06:15.317 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63833' 00:06:15.317 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 63833 00:06:15.317 04:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 63833 00:06:17.247 00:06:17.247 real 0m3.170s 00:06:17.247 user 0m3.249s 00:06:17.247 sys 0m0.525s 00:06:17.247 04:55:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.247 ************************************ 00:06:17.247 END TEST default_locks_via_rpc 00:06:17.247 ************************************ 00:06:17.247 04:55:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.247 04:55:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:17.247 04:55:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.247 04:55:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.247 04:55:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.247 ************************************ 00:06:17.247 START TEST non_locking_app_on_locked_coremask 00:06:17.247 ************************************ 00:06:17.247 04:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:17.247 04:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63898 00:06:17.247 04:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63898 /var/tmp/spdk.sock 00:06:17.247 04:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.247 04:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63898 ']' 00:06:17.247 04:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.247 04:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.247 04:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.247 04:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.247 04:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.247 [2024-07-24 04:55:31.637391] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
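default_locks_via_rpc, which finished above, exercises the same locks from the other direction: the target starts holding them, releases them on request, and re-acquires them, all over RPC. A sketch of that sequence against the pid from this run; rpc_cmd comes from test/common/autotest_common.sh, while no_locks and locks_exist are the cpu_locks.sh helpers traced in the log:

  #!/usr/bin/env bash
  # Toggle CPU core locks at runtime over RPC (sketch of default_locks_via_rpc).
  source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh  # rpc_cmd

  rpc_cmd framework_disable_cpumask_locks   # target releases its spdk_cpu_lock_* files
  no_locks                                  # assert no lock files are held
  rpc_cmd framework_enable_cpumask_locks    # target re-acquires them
  locks_exist 63833                         # lslocks shows spdk_cpu_lock for the pid again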
00:06:17.247 [2024-07-24 04:55:31.637579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63898 ] 00:06:17.247 [2024-07-24 04:55:31.809831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.506 [2024-07-24 04:55:31.969222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.073 04:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.074 04:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:18.074 04:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63914 00:06:18.074 04:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:18.074 04:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63914 /var/tmp/spdk2.sock 00:06:18.074 04:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63914 ']' 00:06:18.074 04:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.074 04:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.074 04:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.074 04:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.074 04:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.074 [2024-07-24 04:55:32.667714] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:18.074 [2024-07-24 04:55:32.667907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63914 ] 00:06:18.332 [2024-07-24 04:55:32.842596] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
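The 'CPU core locks deactivated.' notice above is the point of non_locking_app_on_locked_coremask: with core 0 already locked by the first target, a second target on the same cpumask can only start if it opts out of locking and talks on its own RPC socket. A sketch of the two launches, with the binary path, mask, and sockets taken from this log:

  #!/usr/bin/env bash
  # Two targets on the same core: the second must disable cpumask locks (sketch).
  source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh  # waitforlisten
  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$spdk_tgt" -m 0x1 &                      # first instance locks core 0
  pid1=$!
  waitforlisten "$pid1" /var/tmp/spdk.sock

  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # prints the notice above
  pid2=$!
  waitforlisten "$pid2" /var/tmp/spdk2.sock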
00:06:18.332 [2024-07-24 04:55:32.842679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.591 [2024-07-24 04:55:33.153843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.966 04:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.967 04:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:19.967 04:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63898 00:06:19.967 04:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63898 00:06:19.967 04:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.534 04:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63898 00:06:20.534 04:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63898 ']' 00:06:20.534 04:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63898 00:06:20.534 04:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:20.793 04:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.793 04:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63898 00:06:20.793 04:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.793 04:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.793 killing process with pid 63898 00:06:20.793 04:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63898' 00:06:20.793 04:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63898 00:06:20.793 04:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63898 00:06:24.080 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63914 00:06:24.080 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63914 ']' 00:06:24.080 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63914 00:06:24.080 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:24.080 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.080 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63914 00:06:24.339 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.339 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.339 killing process with pid 63914 00:06:24.339 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63914' 00:06:24.339 04:55:38 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63914 00:06:24.339 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63914 00:06:26.243 00:06:26.243 real 0m8.954s 00:06:26.243 user 0m9.363s 00:06:26.243 sys 0m1.197s 00:06:26.244 04:55:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.244 04:55:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.244 ************************************ 00:06:26.244 END TEST non_locking_app_on_locked_coremask 00:06:26.244 ************************************ 00:06:26.244 04:55:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:26.244 04:55:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.244 04:55:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.244 04:55:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.244 ************************************ 00:06:26.244 START TEST locking_app_on_unlocked_coremask 00:06:26.244 ************************************ 00:06:26.244 04:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:26.244 04:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64040 00:06:26.244 04:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64040 /var/tmp/spdk.sock 00:06:26.244 04:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:26.244 04:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64040 ']' 00:06:26.244 04:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.244 04:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.244 04:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.244 04:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.244 04:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.244 [2024-07-24 04:55:40.641435] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:26.244 [2024-07-24 04:55:40.641645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64040 ] 00:06:26.244 [2024-07-24 04:55:40.811184] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:26.244 [2024-07-24 04:55:40.811293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.502 [2024-07-24 04:55:40.962475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.070 04:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.070 04:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:27.070 04:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.070 04:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64056 00:06:27.070 04:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64056 /var/tmp/spdk2.sock 00:06:27.070 04:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64056 ']' 00:06:27.070 04:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.070 04:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.070 04:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.070 04:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.070 04:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.070 [2024-07-24 04:55:41.647857] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:06:27.070 [2024-07-24 04:55:41.648033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64056 ] 00:06:27.329 [2024-07-24 04:55:41.807932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.587 [2024-07-24 04:55:42.128966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.961 04:55:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.961 04:55:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:28.961 04:55:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64056 00:06:28.961 04:55:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64056 00:06:28.961 04:55:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.527 04:55:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64040 00:06:29.527 04:55:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64040 ']' 00:06:29.527 04:55:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64040 00:06:29.786 04:55:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:29.787 04:55:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.787 04:55:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64040 00:06:29.787 04:55:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.787 04:55:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.787 killing process with pid 64040 00:06:29.787 04:55:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64040' 00:06:29.787 04:55:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64040 00:06:29.787 04:55:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64040 00:06:33.083 04:55:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64056 00:06:33.083 04:55:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64056 ']' 00:06:33.083 04:55:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64056 00:06:33.083 04:55:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:33.083 04:55:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.083 04:55:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64056 00:06:33.083 04:55:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.083 04:55:47 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.083 killing process with pid 64056 00:06:33.083 04:55:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64056' 00:06:33.083 04:55:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64056 00:06:33.083 04:55:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64056 00:06:34.987 00:06:34.987 real 0m8.897s 00:06:34.987 user 0m9.348s 00:06:34.987 sys 0m1.090s 00:06:34.987 04:55:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.987 04:55:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.987 ************************************ 00:06:34.987 END TEST locking_app_on_unlocked_coremask 00:06:34.987 ************************************ 00:06:34.987 04:55:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:34.987 04:55:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.987 04:55:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.987 04:55:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.987 ************************************ 00:06:34.987 START TEST locking_app_on_locked_coremask 00:06:34.987 ************************************ 00:06:34.987 04:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:34.987 04:55:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64175 00:06:34.987 04:55:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.987 04:55:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64175 /var/tmp/spdk.sock 00:06:34.987 04:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64175 ']' 00:06:34.987 04:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.987 04:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.987 04:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.988 04:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.988 04:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.988 [2024-07-24 04:55:49.594909] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:06:34.988 [2024-07-24 04:55:49.595763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64175 ] 00:06:35.247 [2024-07-24 04:55:49.771050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.505 [2024-07-24 04:55:49.975796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64191 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64191 /var/tmp/spdk2.sock 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64191 /var/tmp/spdk2.sock 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64191 /var/tmp/spdk2.sock 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64191 ']' 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.073 04:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.073 [2024-07-24 04:55:50.700993] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:06:36.073 [2024-07-24 04:55:50.701183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64191 ] 00:06:36.332 [2024-07-24 04:55:50.870429] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64175 has claimed it. 00:06:36.332 [2024-07-24 04:55:50.870544] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.899 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64191) - No such process 00:06:36.899 ERROR: process (pid: 64191) is no longer running 00:06:36.899 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.899 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:36.899 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:36.899 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:36.899 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:36.899 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:36.899 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64175 00:06:36.899 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64175 00:06:36.899 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.158 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64175 00:06:37.158 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64175 ']' 00:06:37.158 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64175 00:06:37.158 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:37.158 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.158 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64175 00:06:37.158 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.158 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.158 killing process with pid 64175 00:06:37.158 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64175' 00:06:37.158 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64175 00:06:37.158 04:55:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64175 00:06:39.063 00:06:39.063 real 0m3.931s 00:06:39.063 user 0m4.289s 00:06:39.063 sys 0m0.653s 00:06:39.063 04:55:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.063 04:55:53 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:39.063 ************************************ 00:06:39.063 END TEST locking_app_on_locked_coremask 00:06:39.063 ************************************ 00:06:39.063 04:55:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:39.063 04:55:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.063 04:55:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.063 04:55:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.063 ************************************ 00:06:39.063 START TEST locking_overlapped_coremask 00:06:39.063 ************************************ 00:06:39.063 04:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:39.063 04:55:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64255 00:06:39.063 04:55:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:39.063 04:55:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64255 /var/tmp/spdk.sock 00:06:39.063 04:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64255 ']' 00:06:39.063 04:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.063 04:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.063 04:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.063 04:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.063 04:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.063 [2024-07-24 04:55:53.554611] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:06:39.063 [2024-07-24 04:55:53.554761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64255 ] 00:06:39.321 [2024-07-24 04:55:53.709027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.321 [2024-07-24 04:55:53.862550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.321 [2024-07-24 04:55:53.862692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.321 [2024-07-24 04:55:53.862702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.887 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.887 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:39.887 04:55:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64273 00:06:39.887 04:55:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64273 /var/tmp/spdk2.sock 00:06:39.887 04:55:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:39.887 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:39.887 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64273 /var/tmp/spdk2.sock 00:06:39.887 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:39.887 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.888 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:39.888 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.888 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64273 /var/tmp/spdk2.sock 00:06:39.888 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64273 ']' 00:06:39.888 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.888 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.888 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.888 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.888 04:55:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.148 [2024-07-24 04:55:54.619405] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:06:40.148 [2024-07-24 04:55:54.619609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64273 ] 00:06:40.406 [2024-07-24 04:55:54.797428] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64255 has claimed it. 00:06:40.406 [2024-07-24 04:55:54.797514] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:40.664 ERROR: process (pid: 64273) is no longer running 00:06:40.664 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64273) - No such process 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64255 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 64255 ']' 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 64255 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64255 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.664 killing process with pid 64255 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64255' 00:06:40.664 04:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 64255 00:06:40.664 04:55:55 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 64255 00:06:42.567 ************************************ 00:06:42.567 END TEST locking_overlapped_coremask 00:06:42.567 ************************************ 00:06:42.567 00:06:42.567 real 0m3.646s 00:06:42.567 user 0m9.664s 00:06:42.567 sys 0m0.515s 00:06:42.567 04:55:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.567 04:55:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.567 04:55:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:42.567 04:55:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.567 04:55:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.567 04:55:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.567 ************************************ 00:06:42.567 START TEST locking_overlapped_coremask_via_rpc 00:06:42.567 ************************************ 00:06:42.567 04:55:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:42.567 04:55:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64330 00:06:42.567 04:55:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64330 /var/tmp/spdk.sock 00:06:42.567 04:55:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:42.567 04:55:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64330 ']' 00:06:42.567 04:55:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.567 04:55:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.567 04:55:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.567 04:55:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.567 04:55:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.826 [2024-07-24 04:55:57.256922] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:42.826 [2024-07-24 04:55:57.257071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64330 ] 00:06:42.826 [2024-07-24 04:55:57.410639] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:42.826 [2024-07-24 04:55:57.410705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.085 [2024-07-24 04:55:57.564108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.085 [2024-07-24 04:55:57.564243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.085 [2024-07-24 04:55:57.564259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.652 04:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.652 04:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:43.652 04:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:43.652 04:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64346 00:06:43.652 04:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64346 /var/tmp/spdk2.sock 00:06:43.652 04:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64346 ']' 00:06:43.652 04:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.652 04:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.652 04:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.652 04:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.652 04:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.910 [2024-07-24 04:55:58.284058] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:43.910 [2024-07-24 04:55:58.284412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64346 ] 00:06:43.910 [2024-07-24 04:55:58.452223] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.910 [2024-07-24 04:55:58.452289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.169 [2024-07-24 04:55:58.782925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.169 [2024-07-24 04:55:58.782995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.169 [2024-07-24 04:55:58.783019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.545 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.546 [2024-07-24 04:56:00.077045] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64330 has claimed it. 00:06:45.546 request: 00:06:45.546 { 00:06:45.546 "method": "framework_enable_cpumask_locks", 00:06:45.546 "req_id": 1 00:06:45.546 } 00:06:45.546 Got JSON-RPC error response 00:06:45.546 response: 00:06:45.546 { 00:06:45.546 "code": -32603, 00:06:45.546 "message": "Failed to claim CPU core: 2" 00:06:45.546 } 00:06:45.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
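The -32603 response above is this test's pass condition: the first target (pid 64330, core mask 0x7) already holds a flock'd /var/tmp/spdk_cpu_lock_NNN file for each core it claimed, so the second target's framework_enable_cpumask_locks RPC has to fail on the overlapping core 2. A rough standalone rendition of the checks the suite's locks_exist and check_remaining_locks helpers perform in the traces above (the wrapper script itself is hypothetical):

    #!/usr/bin/env bash
    # Hypothetical standalone version of the lock checks traced above.
    pid=$1

    # locks_exist: a running target must hold at least one spdk_cpu_lock file.
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "pid $pid holds no core locks"

    # check_remaining_locks: a 0x7 mask should leave exactly cores 0-2 locked.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "lock files match the 0x7 mask"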
00:06:45.546 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:45.546 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:45.546 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.546 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.546 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.546 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64330 /var/tmp/spdk.sock 00:06:45.546 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64330 ']' 00:06:45.546 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.546 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.546 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.546 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.546 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.804 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.804 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:45.804 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64346 /var/tmp/spdk2.sock 00:06:45.805 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64346 ']' 00:06:45.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.805 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.805 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.805 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:45.805 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.805 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.063 ************************************ 00:06:46.063 END TEST locking_overlapped_coremask_via_rpc 00:06:46.063 ************************************ 00:06:46.063 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.063 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:46.063 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:46.063 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.063 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.063 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.063 00:06:46.063 real 0m3.478s 00:06:46.063 user 0m1.337s 00:06:46.063 sys 0m0.189s 00:06:46.063 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.063 04:56:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.063 04:56:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:46.063 04:56:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64330 ]] 00:06:46.064 04:56:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64330 00:06:46.064 04:56:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64330 ']' 00:06:46.064 04:56:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64330 00:06:46.064 04:56:00 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:46.064 04:56:00 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.064 04:56:00 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64330 00:06:46.322 killing process with pid 64330 00:06:46.322 04:56:00 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.322 04:56:00 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.322 04:56:00 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64330' 00:06:46.322 04:56:00 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64330 00:06:46.322 04:56:00 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64330 00:06:48.248 04:56:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64346 ]] 00:06:48.248 04:56:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64346 00:06:48.248 04:56:02 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64346 ']' 00:06:48.248 04:56:02 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64346 00:06:48.248 04:56:02 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:48.248 04:56:02 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.248 
04:56:02 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64346 00:06:48.248 killing process with pid 64346 00:06:48.248 04:56:02 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:48.248 04:56:02 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:48.248 04:56:02 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64346' 00:06:48.248 04:56:02 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64346 00:06:48.248 04:56:02 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64346 00:06:50.167 04:56:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:50.167 04:56:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:50.167 04:56:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64330 ]] 00:06:50.168 04:56:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64330 00:06:50.168 04:56:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64330 ']' 00:06:50.168 04:56:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64330 00:06:50.168 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64330) - No such process 00:06:50.168 Process with pid 64330 is not found 00:06:50.168 04:56:04 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64330 is not found' 00:06:50.168 04:56:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64346 ]] 00:06:50.168 04:56:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64346 00:06:50.168 04:56:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64346 ']' 00:06:50.168 04:56:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64346 00:06:50.168 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64346) - No such process 00:06:50.168 Process with pid 64346 is not found 00:06:50.168 04:56:04 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64346 is not found' 00:06:50.168 04:56:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:50.168 00:06:50.168 real 0m39.703s 00:06:50.168 user 1m8.008s 00:06:50.168 sys 0m5.638s 00:06:50.168 04:56:04 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.168 04:56:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.168 ************************************ 00:06:50.168 END TEST cpu_locks 00:06:50.168 ************************************ 00:06:50.168 ************************************ 00:06:50.168 END TEST event 00:06:50.168 ************************************ 00:06:50.168 00:06:50.168 real 1m9.860s 00:06:50.168 user 2m6.320s 00:06:50.168 sys 0m9.062s 00:06:50.168 04:56:04 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.168 04:56:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.168 04:56:04 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:50.168 04:56:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.168 04:56:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.168 04:56:04 -- common/autotest_common.sh@10 -- # set +x 00:06:50.168 ************************************ 00:06:50.168 START TEST thread 00:06:50.168 ************************************ 00:06:50.168 04:56:04 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:50.168 * Looking for test storage... 
00:06:50.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:50.168 04:56:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:50.168 04:56:04 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:50.168 04:56:04 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.168 04:56:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.168 ************************************ 00:06:50.168 START TEST thread_poller_perf 00:06:50.168 ************************************ 00:06:50.168 04:56:04 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:50.168 [2024-07-24 04:56:04.729251] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:50.168 [2024-07-24 04:56:04.729397] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64518 ] 00:06:50.427 [2024-07-24 04:56:04.890649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.686 [2024-07-24 04:56:05.116825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.686 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:52.064 ====================================== 00:06:52.064 busy:2211750486 (cyc) 00:06:52.064 total_run_count: 359000 00:06:52.064 tsc_hz: 2200000000 (cyc) 00:06:52.064 ====================================== 00:06:52.064 poller_cost: 6160 (cyc), 2800 (nsec) 00:06:52.064 00:06:52.064 real 0m1.761s 00:06:52.064 user 0m1.560s 00:06:52.064 sys 0m0.093s 00:06:52.064 04:56:06 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.064 04:56:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:52.064 ************************************ 00:06:52.064 END TEST thread_poller_perf 00:06:52.064 ************************************ 00:06:52.064 04:56:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.064 04:56:06 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:52.064 04:56:06 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.064 04:56:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.064 ************************************ 00:06:52.064 START TEST thread_poller_perf 00:06:52.064 ************************************ 00:06:52.064 04:56:06 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.064 [2024-07-24 04:56:06.554596] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:52.064 [2024-07-24 04:56:06.554764] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64557 ] 00:06:52.323 [2024-07-24 04:56:06.722652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.323 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:52.323 [2024-07-24 04:56:06.870152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.697 ====================================== 00:06:53.697 busy:2203885834 (cyc) 00:06:53.697 total_run_count: 4614000 00:06:53.697 tsc_hz: 2200000000 (cyc) 00:06:53.697 ====================================== 00:06:53.697 poller_cost: 477 (cyc), 216 (nsec) 00:06:53.697 00:06:53.697 real 0m1.674s 00:06:53.697 user 0m1.466s 00:06:53.697 sys 0m0.100s 00:06:53.697 04:56:08 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.697 ************************************ 00:06:53.697 END TEST thread_poller_perf 00:06:53.697 ************************************ 00:06:53.697 04:56:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.697 04:56:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:53.697 00:06:53.697 real 0m3.632s 00:06:53.697 user 0m3.099s 00:06:53.697 sys 0m0.303s 00:06:53.697 04:56:08 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.697 04:56:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.697 ************************************ 00:06:53.697 END TEST thread 00:06:53.697 ************************************ 00:06:53.697 04:56:08 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:53.697 04:56:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.697 04:56:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.697 04:56:08 -- common/autotest_common.sh@10 -- # set +x 00:06:53.697 ************************************ 00:06:53.697 START TEST accel 00:06:53.697 ************************************ 00:06:53.697 04:56:08 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:53.955 * Looking for test storage... 00:06:53.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:53.956 04:56:08 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:53.956 04:56:08 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:53.956 04:56:08 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:53.956 04:56:08 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=64632 00:06:53.956 04:56:08 accel -- accel/accel.sh@63 -- # waitforlisten 64632 00:06:53.956 04:56:08 accel -- common/autotest_common.sh@829 -- # '[' -z 64632 ']' 00:06:53.956 04:56:08 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.956 04:56:08 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.956 04:56:08 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
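Both poller_cost reports above derive directly from the counters printed with them: busy cycles divided by total_run_count, converted to nanoseconds through tsc_hz, with integer truncation throughout. Re-deriving the first run's figures (a throwaway sketch, not part of the suite):

    busy_cyc=2211750486
    total_run_count=359000
    tsc_hz=2200000000
    cost_cyc=$(( busy_cyc / total_run_count ))        # 6160 cyc per poll
    cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # 2800 nsec at 2.2 GHz
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The 0-microsecond run checks out the same way: 2203885834 / 4614000 truncates to 477 cyc, i.e. 216 nsec.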
00:06:53.956 04:56:08 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.956 04:56:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.956 04:56:08 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:53.956 04:56:08 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:53.956 04:56:08 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.956 04:56:08 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.956 04:56:08 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.956 04:56:08 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.956 04:56:08 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.956 04:56:08 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:53.956 04:56:08 accel -- accel/accel.sh@41 -- # jq -r . 00:06:53.956 [2024-07-24 04:56:08.485056] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:53.956 [2024-07-24 04:56:08.485247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64632 ] 00:06:54.214 [2024-07-24 04:56:08.661619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.473 [2024-07-24 04:56:08.881525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.041 04:56:09 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.041 04:56:09 accel -- common/autotest_common.sh@862 -- # return 0 00:06:55.041 04:56:09 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:55.041 04:56:09 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:55.041 04:56:09 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:55.041 04:56:09 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:55.041 04:56:09 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:55.041 04:56:09 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:55.041 04:56:09 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.041 04:56:09 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:55.041 04:56:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.041 04:56:09 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.041 04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.041 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.041 04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.041 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.041 04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.041 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.041 04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.041 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.041 04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.041 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.041 04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.041 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.041 04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.041 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.041 04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.041 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.041 04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.041 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.041 04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.041 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.041 04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.041 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.041 
04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.041 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.042 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.042 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.042 04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.042 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.042 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.042 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.042 04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.042 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.042 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.042 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.042 04:56:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.042 04:56:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.042 04:56:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.042 04:56:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.042 04:56:09 accel -- accel/accel.sh@75 -- # killprocess 64632 00:06:55.042 04:56:09 accel -- common/autotest_common.sh@948 -- # '[' -z 64632 ']' 00:06:55.042 04:56:09 accel -- common/autotest_common.sh@952 -- # kill -0 64632 00:06:55.042 04:56:09 accel -- common/autotest_common.sh@953 -- # uname 00:06:55.042 04:56:09 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.042 04:56:09 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64632 00:06:55.042 04:56:09 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.042 04:56:09 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.042 04:56:09 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64632' 00:06:55.042 killing process with pid 64632 00:06:55.042 04:56:09 accel -- common/autotest_common.sh@967 -- # kill 64632 00:06:55.042 04:56:09 accel -- common/autotest_common.sh@972 -- # wait 64632 00:06:56.947 04:56:11 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:56.947 04:56:11 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:56.947 04:56:11 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:56.947 04:56:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.947 04:56:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.947 04:56:11 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:56.947 04:56:11 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:56.947 04:56:11 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:56.947 04:56:11 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.947 04:56:11 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.947 04:56:11 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.947 04:56:11 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.947 04:56:11 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.947 04:56:11 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:56.947 04:56:11 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
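The long run of expected_opcs assignments above is the suite reading accel_get_opc_assignments back one opcode=module pair per line; every opcode resolves to the software module because no hardware accel module was configured for this run (each [[ 0 -gt 0 ]] and [[ -n '' ]] gate in build_accel_config falls through). The underlying query, with rpc_cmd expanded to SPDK's rpc.py client (invocation reconstructed from the trace, not copied verbatim from it):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'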
00:06:56.947 04:56:11 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.947 04:56:11 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:56.947 04:56:11 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:56.947 04:56:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:56.947 04:56:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.947 04:56:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.947 ************************************ 00:06:56.947 START TEST accel_missing_filename 00:06:56.947 ************************************ 00:06:56.947 04:56:11 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:56.947 04:56:11 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:56.947 04:56:11 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:56.947 04:56:11 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:56.947 04:56:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.947 04:56:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:56.947 04:56:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.947 04:56:11 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:56.947 04:56:11 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:56.947 04:56:11 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:56.948 04:56:11 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.948 04:56:11 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.948 04:56:11 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.948 04:56:11 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.948 04:56:11 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.948 04:56:11 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:56.948 04:56:11 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:56.948 [2024-07-24 04:56:11.480833] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:56.948 [2024-07-24 04:56:11.481026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64702 ] 00:06:57.207 [2024-07-24 04:56:11.649449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.207 [2024-07-24 04:56:11.797792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.465 [2024-07-24 04:56:11.950582] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.724 [2024-07-24 04:56:12.317275] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:58.292 A filename is required. 
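accel_perf correctly refuses to run compress without -l, and the NOT wrapper that launched it exists to turn that expected failure into a pass; the status handling that follows (es=234 collapsing to 1) is the tail of that wrapper. A hedged sketch pieced together from the @648-@675 trace; the real case statement maps specific status codes, which is simplified to a single remap here:

# Rough reconstruction of the NOT expected-failure wrapper.
NOT() {
  local es=0                             # @648
  type -t "$1" >/dev/null || return 1    # stand-in for the valid_exec_arg check (@636-@640)
  "$@" || es=$?                          # @651: run the wrapped command, keep its status
  (( es > 128 )) && es=$((es - 128))     # @659-@660: strip the signal offset (234 -> 106)
  (( es != 0 )) && es=1                  # simplified stand-in for the case "$es" remap
  (( !es == 0 ))                         # @675: NOT succeeds only if the command failed
}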
00:06:58.292 04:56:12 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:58.292 04:56:12 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.292 04:56:12 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:58.292 04:56:12 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:58.292 04:56:12 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:58.292 04:56:12 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.292 00:06:58.292 real 0m1.225s 00:06:58.292 user 0m1.023s 00:06:58.292 sys 0m0.145s 00:06:58.292 04:56:12 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.292 04:56:12 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:58.292 ************************************ 00:06:58.292 END TEST accel_missing_filename 00:06:58.292 ************************************ 00:06:58.292 04:56:12 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:58.292 04:56:12 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:58.292 04:56:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.292 04:56:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.292 ************************************ 00:06:58.292 START TEST accel_compress_verify 00:06:58.292 ************************************ 00:06:58.292 04:56:12 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:58.292 04:56:12 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:58.292 04:56:12 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:58.292 04:56:12 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:58.292 04:56:12 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.292 04:56:12 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:58.292 04:56:12 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.292 04:56:12 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:58.292 04:56:12 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:58.292 04:56:12 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:58.292 04:56:12 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.292 04:56:12 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.292 04:56:12 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.292 04:56:12 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.292 04:56:12 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.292 04:56:12 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:58.292 04:56:12 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:06:58.292 [2024-07-24 04:56:12.757506] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:58.293 [2024-07-24 04:56:12.757679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64739 ] 00:06:58.551 [2024-07-24 04:56:12.926873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.551 [2024-07-24 04:56:13.083377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.813 [2024-07-24 04:56:13.230359] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.074 [2024-07-24 04:56:13.612208] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:59.333 00:06:59.333 Compression does not support the verify option, aborting. 00:06:59.333 04:56:13 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:59.333 04:56:13 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.333 04:56:13 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:59.333 04:56:13 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:59.333 04:56:13 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:59.333 04:56:13 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.333 00:06:59.333 real 0m1.257s 00:06:59.333 user 0m1.053s 00:06:59.333 sys 0m0.144s 00:06:59.333 04:56:13 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.333 04:56:13 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:59.333 ************************************ 00:06:59.333 END TEST accel_compress_verify 00:06:59.333 ************************************ 00:06:59.593 04:56:14 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:59.593 04:56:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:59.594 04:56:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.594 04:56:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.594 ************************************ 00:06:59.594 START TEST accel_wrong_workload 00:06:59.594 ************************************ 00:06:59.594 04:56:14 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:59.594 04:56:14 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:59.594 04:56:14 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:59.594 04:56:14 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:59.594 04:56:14 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.594 04:56:14 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:59.594 04:56:14 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.594 04:56:14 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:59.594 04:56:14 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 
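Every accel_perf launch in this log is preceded by the same build_accel_config trace (@31 through @41); the one for this foobar run follows. A loose reconstruction of the helper: the guard variables and JSON fragments below are invented for illustration, and only the expressions visible in the trace are taken from the log:

# Hedged sketch of build_accel_config; names marked "assumed" are not from the log.
build_accel_config() {
  accel_json_cfg=()                                          # @31: start with no fragments
  # @32-@34: three optional engines, each gated on a counter; all evaluate [[ 0 -gt 0 ]] here
  [[ ${ACCEL_CFG_A:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "engine_a"}')  # assumed name
  [[ ${ACCEL_CFG_B:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "engine_b"}')  # assumed name
  [[ ${ACCEL_CFG_C:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "engine_c"}')  # assumed name
  [[ -n "${ACCEL_EXTRA_CFG:-}" ]] && accel_json_cfg+=("$ACCEL_EXTRA_CFG")      # @36, assumed name
  local IFS=,                                                # @40: join fragments with commas
  # @41: normalize with jq; the harness hands the result to accel_perf as -c /dev/fd/62
  echo "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}" | jq -r .
}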
00:06:59.594 04:56:14 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:59.594 04:56:14 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.594 04:56:14 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.594 04:56:14 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.594 04:56:14 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.594 04:56:14 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.594 04:56:14 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:59.594 04:56:14 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:59.594 Unsupported workload type: foobar 00:06:59.594 [2024-07-24 04:56:14.063622] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:59.594 accel_perf options: 00:06:59.594 [-h help message] 00:06:59.594 [-q queue depth per core] 00:06:59.594 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:59.594 [-T number of threads per core 00:06:59.594 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:59.594 [-t time in seconds] 00:06:59.594 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:59.594 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:59.594 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:59.594 [-l for compress/decompress workloads, name of uncompressed input file 00:06:59.594 [-S for crc32c workload, use this seed value (default 0) 00:06:59.594 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:59.594 [-f for fill workload, use this BYTE value (default 255) 00:06:59.594 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:59.594 [-y verify result if this switch is on] 00:06:59.594 [-a tasks to allocate per core (default: same value as -q)] 00:06:59.594 Can be used to spread operations across a wider range of memory. 
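Read as a reference, the usage text above pins down the invocations the surrounding tests rely on. Two representative happy-path runs, using the binary and input-file paths from the trace and omitting the harness's -c /dev/fd/62 config descriptor so the app falls back to its defaults:

# CRC-32C over 4 KiB buffers for one second, seed 32, with result verification
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
# compress an input file; per the usage text, -o 0 makes the transfer size track the file size
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -o 0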
00:06:59.594 04:56:14 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:59.594 04:56:14 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.594 04:56:14 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.594 04:56:14 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.594 00:06:59.594 real 0m0.070s 00:06:59.594 user 0m0.090s 00:06:59.594 sys 0m0.030s 00:06:59.594 04:56:14 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.594 04:56:14 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:59.594 ************************************ 00:06:59.594 END TEST accel_wrong_workload 00:06:59.594 ************************************ 00:06:59.594 04:56:14 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:59.594 04:56:14 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:59.594 04:56:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.594 04:56:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.594 ************************************ 00:06:59.594 START TEST accel_negative_buffers 00:06:59.594 ************************************ 00:06:59.594 04:56:14 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:59.594 04:56:14 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:59.594 04:56:14 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:59.594 04:56:14 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:59.594 04:56:14 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.594 04:56:14 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:59.594 04:56:14 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.594 04:56:14 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:59.594 04:56:14 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:59.594 04:56:14 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:59.594 04:56:14 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.594 04:56:14 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.594 04:56:14 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.594 04:56:14 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.594 04:56:14 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.594 04:56:14 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:59.594 04:56:14 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:59.594 -x option must be non-negative. 
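The parser catches -x -1 before the app starts, matching the usage note that xor takes a minimum of two source buffers, so the smallest valid xor run would be:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2   # two source buffers, the documented minimum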
00:06:59.594 [2024-07-24 04:56:14.185352] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:59.594 accel_perf options: [identical usage text to the foobar run above] 00:06:59.594 04:56:14 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:59.594 04:56:14 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.594 04:56:14 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.594 04:56:14 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.594 00:06:59.594 real 0m0.072s 00:06:59.594 user 0m0.071s 00:06:59.594 sys 0m0.041s 00:06:59.594 04:56:14 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.594 ************************************ 00:06:59.594 END TEST accel_negative_buffers 00:06:59.594 ************************************ 00:06:59.594 04:56:14 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:59.854 04:56:14 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:59.854 04:56:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:59.854 04:56:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.854 04:56:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.854 ************************************ 00:06:59.854 START TEST accel_crc32c 00:06:59.854 ************************************ 00:06:59.854 04:56:14 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:59.854 04:56:14 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:59.854 04:56:14 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:59.854 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.854 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.854 04:56:14 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:59.854 04:56:14 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 04:56:14 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:06:59.854 04:56:14 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.854 04:56:14 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.854 04:56:14 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.854 04:56:14 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.854 04:56:14 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.854 04:56:14 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:59.854 04:56:14 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:59.854 [2024-07-24 04:56:14.302326] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:59.854 [2024-07-24 04:56:14.302491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64811 ] 00:06:59.854 [2024-07-24 04:56:14.471458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.113 [2024-07-24 04:56:14.619955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.373 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.280 04:56:16 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:02.280 04:56:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.280 00:07:02.280 real 0m2.228s 00:07:02.280 user 0m1.985s 00:07:02.280 sys 0m0.152s 00:07:02.280 04:56:16 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.280 ************************************ 00:07:02.280 END TEST accel_crc32c 00:07:02.280 ************************************ 00:07:02.280 04:56:16 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:02.280 04:56:16 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:02.280 04:56:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:02.280 04:56:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.280 04:56:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.280 ************************************ 00:07:02.280 START TEST accel_crc32c_C2 00:07:02.280 ************************************ 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:02.280 04:56:16 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:02.280 [2024-07-24 04:56:16.583545] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:02.280 [2024-07-24 04:56:16.583755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64853 ] 00:07:02.280 [2024-07-24 04:56:16.752642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.280 [2024-07-24 04:56:16.906970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.539 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.539 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.539 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.539 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.539 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.539 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.539 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.539 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.539 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:02.539 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.539 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.539 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.540 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.447 
04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.447 00:07:04.447 real 0m2.231s 00:07:04.447 user 0m0.015s 00:07:04.447 sys 0m0.005s 00:07:04.447 ************************************ 00:07:04.447 END TEST accel_crc32c_C2 00:07:04.447 ************************************ 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.447 04:56:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:04.447 04:56:18 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:04.447 04:56:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:04.447 04:56:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.447 04:56:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.447 ************************************ 00:07:04.447 START TEST accel_copy 00:07:04.447 ************************************ 00:07:04.447 04:56:18 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:04.447 04:56:18 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:04.447 04:56:18 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:04.447 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.447 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.447 04:56:18 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:04.447 04:56:18 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:04.447 04:56:18 
accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:04.447 04:56:18 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.447 04:56:18 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.447 04:56:18 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.447 04:56:18 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.447 04:56:18 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.447 04:56:18 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:04.447 04:56:18 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:04.447 [2024-07-24 04:56:18.865704] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:04.447 [2024-07-24 04:56:18.865914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64900 ] 00:07:04.447 [2024-07-24 04:56:19.036395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.706 [2024-07-24 04:56:19.188134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.966 
04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.966 04:56:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.870 04:56:21 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:06.870 04:56:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.870 00:07:06.870 real 0m2.237s 00:07:06.870 user 0m2.009s 00:07:06.870 sys 0m0.135s 00:07:06.870 ************************************ 00:07:06.870 END TEST accel_copy 00:07:06.870 ************************************ 00:07:06.870 04:56:21 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.870 04:56:21 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:06.870 04:56:21 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.870 04:56:21 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:06.870 04:56:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.870 04:56:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.870 ************************************ 00:07:06.870 START TEST accel_fill 00:07:06.870 ************************************ 00:07:06.870 04:56:21 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.870 04:56:21 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:06.870 04:56:21 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:06.870 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.870 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.870 04:56:21 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.870 04:56:21 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.870 04:56:21 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:06.870 04:56:21 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.870 04:56:21 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.870 04:56:21 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.870 04:56:21 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.870 04:56:21 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.870 04:56:21 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:06.870 04:56:21 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
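Unlike the earlier single-queue runs, the fill test also exercises the queueing knobs: -f 128 sets the fill byte, -q 64 the per-core queue depth, and -a 64 the tasks allocated per core (the usage text notes -a defaults to the -q value anyway). Standalone, with the harness config descriptor omitted, that is:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y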
00:07:06.870 [2024-07-24 04:56:21.163151] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:06.870 [2024-07-24 04:56:21.163337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64941 ] 00:07:06.871 [2024-07-24 04:56:21.334125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.143 [2024-07-24 04:56:21.529041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.143 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.143 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.143 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.143 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.143 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.143 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.143 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.143 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:07.144 
04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.144 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@19 
-- # IFS=: 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.092 ************************************ 00:07:09.092 END TEST accel_fill 00:07:09.092 ************************************ 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:09.092 04:56:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.092 00:07:09.092 real 0m2.316s 00:07:09.092 user 0m2.071s 00:07:09.092 sys 0m0.152s 00:07:09.092 04:56:23 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.092 04:56:23 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:09.092 04:56:23 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:09.092 04:56:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:09.092 04:56:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.092 04:56:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.092 ************************************ 00:07:09.092 START TEST accel_copy_crc32c 00:07:09.092 ************************************ 00:07:09.092 04:56:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:09.092 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:09.092 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:09.092 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.092 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.092 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:09.092 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:09.092 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:09.092 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.092 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.093 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.093 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.093 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.093 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:09.093 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:09.093 [2024-07-24 04:56:23.537774] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
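The block above shows how each accel test drives the accel_perf example binary: build_accel_config assembles a JSON accel configuration and the harness hands it to the binary over bash file descriptor 62 (-c /dev/fd/62), with the workload selected by -w and the flags -t 1 -y passed exactly as in every other block of this log. A minimal sketch of the same invocation done by hand, assuming the repo path shown in this log and assuming an empty JSON object is an acceptable no-op config (neither assumption is confirmed here):

    # Run the software-path copy_crc32c benchmark the way accel.sh does,
    # but feed the JSON config on stdin instead of fd 62.
    accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    printf '{}' | "$accel_perf" -c /dev/fd/0 -t 1 -w copy_crc32c -y
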
00:07:09.093 [2024-07-24 04:56:23.538022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64982 ] 00:07:09.093 [2024-07-24 04:56:23.711575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.352 [2024-07-24 04:56:23.865566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 
04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.612 04:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val= 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.519 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.520 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.520 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.520 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.520 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.520 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.520 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.520 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:11.520 ************************************ 00:07:11.520 END TEST accel_copy_crc32c 00:07:11.520 ************************************ 00:07:11.520 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.520 00:07:11.520 real 0m2.280s 00:07:11.520 user 0m2.024s 00:07:11.520 sys 0m0.159s 00:07:11.520 04:56:25 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.520 04:56:25 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:11.520 04:56:25 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:11.520 04:56:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:11.520 04:56:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.520 04:56:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.520 ************************************ 00:07:11.520 START TEST accel_copy_crc32c_C2 00:07:11.520 ************************************ 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:11.520 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:11.520 [2024-07-24 04:56:25.872256] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:11.520 [2024-07-24 04:56:25.872442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65028 ] 00:07:11.520 [2024-07-24 04:56:26.046581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.779 [2024-07-24 04:56:26.207888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.779 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- 
# read -r var val 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.780 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.687 00:07:13.687 real 0m2.285s 00:07:13.687 user 0m2.039s 00:07:13.687 sys 0m0.152s 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.687 ************************************ 00:07:13.687 END TEST accel_copy_crc32c_C2 00:07:13.687 ************************************ 00:07:13.687 04:56:28 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:13.687 04:56:28 accel -- accel/accel.sh@107 -- # run_test 
accel_dualcast accel_test -t 1 -w dualcast -y 00:07:13.687 04:56:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:13.687 04:56:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.687 04:56:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.687 ************************************ 00:07:13.687 START TEST accel_dualcast 00:07:13.687 ************************************ 00:07:13.687 04:56:28 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:13.687 04:56:28 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:13.687 04:56:28 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:13.687 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.687 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.687 04:56:28 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:13.687 04:56:28 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:13.687 04:56:28 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:13.687 04:56:28 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.687 04:56:28 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.687 04:56:28 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.687 04:56:28 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.687 04:56:28 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.687 04:56:28 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:13.687 04:56:28 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:13.687 [2024-07-24 04:56:28.212575] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
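The long runs of case "$var" / IFS=: / read -r var val lines that dominate these blocks are accel.sh walking the benchmark's settings dump: each line is split on the first colon into a var name and a val (0x1, dualcast, '4096 bytes', software, '1 seconds', ...), and the opcode and module are captured along the way (accel_opc=... at accel.sh@23, accel_module=software at accel.sh@22). A sketch of that loop shape, with the input source and the exact case arms as assumptions since only the accel.sh line numbers are visible in the trace:

    # Split "var:val" settings lines and remember opcode and module,
    # mirroring the IFS=:/read/case pattern traced above.
    while IFS=: read -r var val; do
      case "$var" in
        opc)    accel_opc=$val ;;      # e.g. dualcast
        module) accel_module=$val ;;   # e.g. software
      esac
    done <<< "$settings_dump"          # hypothetical captured output
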
00:07:13.687 [2024-07-24 04:56:28.212775] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65075 ] 00:07:13.946 [2024-07-24 04:56:28.385981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.946 [2024-07-24 04:56:28.537773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.206 04:56:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.207 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.207 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:16.113 04:56:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.113 00:07:16.113 real 0m2.278s 00:07:16.113 user 0m2.031s 00:07:16.113 sys 0m0.150s 00:07:16.113 04:56:30 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.113 ************************************ 00:07:16.113 END TEST accel_dualcast 00:07:16.113 ************************************ 00:07:16.114 04:56:30 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:16.114 04:56:30 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:16.114 04:56:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:16.114 04:56:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.114 04:56:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.114 ************************************ 00:07:16.114 START TEST accel_compare 00:07:16.114 ************************************ 00:07:16.114 04:56:30 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:16.114 04:56:30 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:16.114 04:56:30 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:16.114 04:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.114 04:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.114 04:56:30 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:16.114 04:56:30 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:16.114 04:56:30 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:16.114 04:56:30 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.114 04:56:30 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.114 04:56:30 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.114 04:56:30 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.114 04:56:30 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.114 04:56:30 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:16.114 04:56:30 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:16.114 [2024-07-24 04:56:30.537632] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
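Every block then ends with the same three-part check at accel.sh@27, visible again just above: a module was parsed, an opcode was parsed, and the module is the expected software fallback (the [[ software == \s\o\f\t\w\a\r\e ]] test). The real/user/sys line is the timing of the whole test body. As a sketch, with the variable names assumed:

    # The pass condition each END TEST banner is gated on.
    [[ -n "$accel_module" && -n "$accel_opc" ]] &&
      [[ "$accel_module" == software ]] || exit 1
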
00:07:16.114 [2024-07-24 04:56:30.537780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65116 ] 00:07:16.114 [2024-07-24 04:56:30.694691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.373 [2024-07-24 04:56:30.868272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.633 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:18.544 04:56:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.544 00:07:18.544 real 0m2.258s 00:07:18.544 user 0m2.018s 00:07:18.544 sys 0m0.140s 00:07:18.544 04:56:32 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.544 ************************************ 00:07:18.544 END TEST accel_compare 00:07:18.544 ************************************ 00:07:18.544 04:56:32 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:18.544 04:56:32 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:18.544 04:56:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:18.544 04:56:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.544 04:56:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.544 ************************************ 00:07:18.544 START TEST accel_xor 00:07:18.544 ************************************ 00:07:18.545 04:56:32 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:18.545 04:56:32 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:18.545 04:56:32 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:18.545 04:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.545 04:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.545 04:56:32 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:18.545 04:56:32 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:18.545 04:56:32 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:18.545 04:56:32 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.545 04:56:32 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.545 04:56:32 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.545 04:56:32 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.545 04:56:32 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.545 04:56:32 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:18.545 04:56:32 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:18.545 [2024-07-24 04:56:32.860976] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
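The START TEST / END TEST banners, the '[' 7 -le 1 ']' argument guard, and the xtrace_disable calls all come from the run_test helper in common/autotest_common.sh, which wraps a named test body and times it. The helper's source is not shown in this log, so the following is only a sketch of the behavior observable here, under an assumed name:

    # Approximate shape of the wrapper behind lines like
    #   run_test accel_xor accel_test -t 1 -w xor -y
    run_test_sketch() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"    # runs e.g. accel_test -t 1 -w xor -y
      echo "************ END TEST $name ************"
    }
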
00:07:18.545 [2024-07-24 04:56:32.861135] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65157 ] 00:07:18.545 [2024-07-24 04:56:33.030616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.804 [2024-07-24 04:56:33.203882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.804 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.805 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.805 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.805 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.805 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.805 04:56:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.805 04:56:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.805 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.805 04:56:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.711 04:56:35 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:20.711 04:56:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.711 00:07:20.711 real 0m2.364s 00:07:20.711 user 0m2.120s 00:07:20.711 sys 0m0.147s 00:07:20.711 ************************************ 00:07:20.711 END TEST accel_xor 00:07:20.712 ************************************ 00:07:20.712 04:56:35 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.712 04:56:35 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:20.712 04:56:35 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:20.712 04:56:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:20.712 04:56:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.712 04:56:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.712 ************************************ 00:07:20.712 START TEST accel_xor 00:07:20.712 ************************************ 00:07:20.712 04:56:35 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:20.712 04:56:35 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:20.712 04:56:35 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:20.712 04:56:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.712 04:56:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.712 04:56:35 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:20.712 04:56:35 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:20.712 04:56:35 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:20.712 04:56:35 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.712 04:56:35 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.712 04:56:35 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.712 04:56:35 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.712 04:56:35 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.712 04:56:35 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:20.712 04:56:35 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:20.712 [2024-07-24 04:56:35.277160] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
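The second accel_xor block differs from the first only in -x 3: the earlier run XORed two source buffers (val=2 in its trace), while this one asks accel_perf for three (val=3 below). Reproduced by hand under the same assumptions as the copy_crc32c sketch earlier:

    # Same software-path xor benchmark, but with three source buffers.
    printf '{}' | "$accel_perf" -c /dev/fd/0 -t 1 -w xor -y -x 3
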
00:07:20.712 [2024-07-24 04:56:35.277316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65204 ]
00:07:20.971 [2024-07-24 04:56:35.450412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:20.971 [2024-07-24 04:56:35.599073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.230 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:07:21.230 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:07:21.230 04:56:35 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:07:21.230 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:07:21.230 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:21.230 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:07:21.230 04:56:35 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:07:21.230 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:21.230 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:21.230 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:07:21.230 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:07:21.230 04:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:07:23.135 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:07:23.135 04:56:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:23.135 04:56:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:23.135 04:56:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:23.135 real 0m2.224s
00:07:23.135 user 0m1.990s
00:07:23.135 sys 0m0.137s
00:07:23.135 ************************************
00:07:23.135 END TEST accel_xor
00:07:23.135 ************************************
00:07:23.135 04:56:37 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:23.135 04:56:37 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:23.135 04:56:37 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:23.135 04:56:37 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:07:23.135 ************************************
00:07:23.135 START TEST accel_dif_verify
00:07:23.135 ************************************
00:07:23.135 04:56:37 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:07:23.136 04:56:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:23.136 04:56:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:07:23.136 04:56:37 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:07:23.136 [2024-07-24 04:56:37.564637] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
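The dif_verify pass just launched passes only a workload and duration, so the geometry echoed in its trace below (4096-byte buffers plus the 512-byte and 8-byte values, which appear to be the DIF block and metadata sizes) comes from accel_perf defaults. A hand-run sketch under the same assumptions as the xor example; note that the Yes/No value each trace echoes lines up with whether -y was given.

  # Sketch: stand-alone DIF-verify pass; no -y here, matching val=No below.
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w dif_verify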
00:07:23.136 [2024-07-24 04:56:37.564802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65250 ]
00:07:23.136 [2024-07-24 04:56:37.739399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.394 [2024-07-24 04:56:37.925423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.651 04:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:07:23.651 04:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:07:23.651 04:56:38 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:07:23.651 04:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:23.651 04:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:23.651 04:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:07:23.652 04:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:07:23.652 04:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:07:23.652 04:56:38 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:07:23.652 04:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:07:23.652 04:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:07:23.652 04:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:07:23.652 04:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:07:23.652 04:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:07:25.555 04:56:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:07:25.555 04:56:39 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:25.555 04:56:39 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:07:25.555 04:56:39 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:25.555 real 0m2.408s
00:07:25.555 user 0m2.151s
00:07:25.555 sys 0m0.165s
00:07:25.555 ************************************
00:07:25.555 END TEST accel_dif_verify
00:07:25.555 ************************************
00:07:25.555 04:56:39 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:25.555 04:56:39 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:07:25.555 04:56:39 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:07:25.555 04:56:39 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:07:25.555 ************************************
00:07:25.555 START TEST accel_dif_generate
00:07:25.555 ************************************
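The dif_generate pass starting here is the write-side counterpart of dif_verify: it generates protection information over the same default geometry rather than checking it. To also reproduce the real/user/sys summary the harness prints after each test, the sketch wraps the run in time (same layout assumptions as above).

  # Sketch: stand-alone DIF-generate pass, timed the way the harness reports it.
  cd /home/vagrant/spdk_repo/spdk
  time ./build/examples/accel_perf -t 1 -w dif_generate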
00:07:25.555 04:56:39 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:07:25.555 04:56:39 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:07:25.555 04:56:39 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:07:25.555 04:56:39 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:07:25.555 [2024-07-24 04:56:40.005364] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
00:07:25.555 [2024-07-24 04:56:40.005543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65291 ]
00:07:25.814 [2024-07-24 04:56:40.175287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:25.814 [2024-07-24 04:56:40.356709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.073 04:56:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:07:26.073 04:56:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:07:26.073 04:56:40 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:07:26.073 04:56:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:26.073 04:56:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:26.073 04:56:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:07:26.073 04:56:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:07:26.073 04:56:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:07:26.073 04:56:40 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:07:26.073 04:56:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:07:26.073 04:56:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:07:26.073 04:56:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:07:26.073 04:56:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:07:26.073 04:56:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:07:27.978 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:07:27.978 04:56:42 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:27.978 04:56:42 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:07:27.978 04:56:42 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:27.978 real 0m2.318s
00:07:27.978 user 0m2.080s
00:07:27.978 sys 0m0.146s
00:07:27.978 ************************************
00:07:27.978 END TEST accel_dif_generate
00:07:27.978 ************************************
00:07:27.978 04:56:42 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:27.978 04:56:42 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:07:27.978 04:56:42 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:07:27.978 04:56:42 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:07:27.978 ************************************
00:07:27.978 START TEST accel_dif_generate_copy
00:07:27.978 ************************************
00:07:27.978 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:07:27.978 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:07:27.978 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:27.978 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:07:27.978 [2024-07-24 04:56:42.376336] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
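Judging by the opcode name (the trace itself only echoes accel_opc=dif_generate_copy), this pass combines DIF generation with a copy into a separate destination buffer. Hand-run sketch, same assumptions as the earlier examples:

  # Sketch: stand-alone dif_generate_copy pass over the default geometry.
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w dif_generate_copy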
00:07:27.978 [2024-07-24 04:56:42.376501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65338 ]
00:07:27.978 [2024-07-24 04:56:42.548935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:28.238 [2024-07-24 04:56:42.701650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:28.238 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:07:28.238 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:07:28.238 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:07:28.238 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:28.238 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:28.238 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software
00:07:28.238 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:07:28.238 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:07:28.238 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:07:28.497 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:07:28.497 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:07:28.497 04:56:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No
00:07:30.402 04:56:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:07:30.402 04:56:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:30.402 04:56:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:07:30.402 04:56:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:30.402 real 0m2.261s
00:07:30.402 user 0m2.033s
00:07:30.402 sys 0m0.135s
00:07:30.402 ************************************
00:07:30.402 END TEST accel_dif_generate_copy
00:07:30.402 ************************************
00:07:30.402 04:56:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:30.402 04:56:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:07:30.402 04:56:44 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:07:30.402 04:56:44 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:30.402 04:56:44 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:07:30.402 ************************************
00:07:30.402 START TEST accel_comp
00:07:30.402 ************************************
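The compress pass starting here is the first in this batch to take an input corpus: -l points accel_perf at the bib test file shipped in the repo. Sketch under the same assumptions, with an exit-status check standing in for the harness's pass/fail gate:

  # Sketch: stand-alone compress pass over the repo's bib corpus.
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib && echo 'compress pass OK'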
00:07:30.402 04:56:44 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:30.403 04:56:44 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:30.403 04:56:44 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:07:30.403 04:56:44 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
00:07:30.403 [2024-07-24 04:56:44.692304] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
00:07:30.403 [2024-07-24 04:56:44.692462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65379 ]
00:07:30.403 [2024-07-24 04:56:44.862727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.663 [2024-07-24 04:56:45.023623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:30.663 04:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:07:30.663 04:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:07:30.663 04:56:45 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:07:30.663 04:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:30.663 04:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:07:30.663 04:56:45 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:07:30.663 04:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:30.663 04:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:30.663 04:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:30.663 04:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:07:30.663 04:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:07:30.663 04:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:07:32.569 04:56:46 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:32.569 04:56:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:32.569 04:56:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:07:32.569 04:56:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:32.569 real 0m2.260s
00:07:32.569 user 0m2.024s
00:07:32.569 sys 0m0.141s
00:07:32.569 ************************************
00:07:32.569 END TEST accel_comp
00:07:32.569 ************************************
00:07:32.569 04:56:46 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:32.569 04:56:46 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:07:32.569 04:56:46 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:07:32.569 04:56:46 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:32.569 ************************************
00:07:32.569 START TEST accel_decomp
00:07:32.569 ************************************
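accel_decomp reverses the previous pass: it decompresses the same bib corpus, and this time -y is given, which matches the val=Yes echoed in the trace below (just as the -y xor run echoed Yes while the dif and compress runs echoed No). Sketch:

  # Sketch: stand-alone decompress pass with result verification enabled (-y).
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y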
00:07:32.569 04:56:46 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:07:32.569 04:56:46 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:07:32.569 04:56:46 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:07:32.569 04:56:46 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:07:32.569 04:56:46 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config [default accel_json_cfg checks (@31-@40) elided]
00:07:32.569 04:56:46 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r .
00:07:32.569 [2024-07-24 04:56:47.005064] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
00:07:32.569 [2024-07-24 04:56:47.005326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65424 ]
00:07:32.569 [2024-07-24 04:56:47.178950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:32.828 [2024-07-24 04:56:47.328437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.088 04:56:47 accel.accel_decomp -- accel/accel.sh@20 -- # option trace (loop plumbing elided): val=0x1, val=decompress (accel_opc=decompress), val='4096 bytes', val=software (accel_module=software), val=/home/vagrant/spdk_repo/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=Yes
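The @12 line above is the concrete binary under test: accel_perf reads its accel-framework JSON config from /dev/fd/62 (built by build_accel_config and filtered through jq -r .) and runs a one-second software decompress of the bib fixture with verification (-y). A hypothetical standalone equivalent follows; the paths are taken from the log, but the inline '{}' config is only a stand-in, since the log does not show build_accel_config's output:

# Hypothetical manual run; '{}' stands in for the real JSON accel config.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/accel_perf" \
    -c /dev/fd/62 \
    -t 1 -w decompress \
    -l "$SPDK/test/accel/bib" \
    -y \
    62<<< '{}'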
00:07:33.088 04:56:47 accel.accel_decomp [trailing 'val=' loop-drain xtrace elided; the 1-second run completes at 04:56:49]
00:07:34.994 04:56:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:34.994 04:56:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:34.994 04:56:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:34.994 real 0m2.250s
00:07:34.994 user 0m2.015s
00:07:34.994 sys 0m0.140s
00:07:34.994 ************************************
00:07:34.994 END TEST accel_decomp
00:07:34.994 ************************************
00:07:34.994 04:56:49 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:07:34.994 ************************************
00:07:34.994 START TEST accel_decomp_full
00:07:34.994 ************************************
00:07:34.994 04:56:49 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:07:34.994 04:56:49 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:07:34.994 04:56:49 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config / jq -r . [default-config xtrace elided, as above]
00:07:34.994 [2024-07-24 04:56:49.306329] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
00:07:34.994 [2024-07-24 04:56:49.306482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65466 ]
00:07:34.994 [2024-07-24 04:56:49.472084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:34.994 [2024-07-24 04:56:49.621615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:35.254 04:56:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # option trace (loop plumbing elided): val=0x1, val=decompress (accel_opc=decompress), val='111250 bytes', val=software (accel_module=software), val=/home/vagrant/spdk_repo/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=Yes
00:07:35.255 04:56:49 accel.accel_decomp_full [trailing 'val=' loop-drain xtrace elided; run completes at 04:56:51]
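Note the transfer size in this trace: with -o 0 the option loop logs val='111250 bytes' where the plain accel_decomp run above logged val='4096 bytes', so -o 0 evidently makes the test size its buffers to the full decompressed chunk of the bib fixture rather than the 4 KiB default (an inference from the trace, not something the log states). One way to pull these negotiated values out of a saved console log, assuming it was captured to a file named console.log (the name is illustrative):

# Illustrative: tally the option values the accel suites negotiated.
grep -Eo "val=[^ ]+( bytes'| seconds')?" console.log | sort | uniq -c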
00:07:37.180 04:56:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:37.180 04:56:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:37.180 04:56:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:37.180 real 0m2.263s
00:07:37.180 user 0m2.037s
00:07:37.180 sys 0m0.134s
00:07:37.180 ************************************
00:07:37.180 END TEST accel_decomp_full
00:07:37.180 ************************************
00:07:37.180 04:56:51 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:07:37.180 ************************************
00:07:37.180 START TEST accel_decomp_mcore
00:07:37.180 ************************************
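This run adds -m 0xf, a four-core reactor mask; the EAL notices below duly report 'Total cores available: 4' and reactors starting on cores 0-3. A small helper for expanding such a hex mask into CPU ids (purely illustrative, not part of SPDK):

# Expand a hex core mask into a CPU list, e.g. 0xf -> 0 1 2 3.
mask_to_cpus() {
    local mask=$(( $1 )) cpu=0
    while (( mask )); do
        (( mask & 1 )) && printf '%d ' "$cpu"
        (( mask >>= 1, cpu++ ))
    done
    echo
}
mask_to_cpus 0xf   # -> 0 1 2 3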
00:07:37.180 04:56:51 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:07:37.180 04:56:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:07:37.180 04:56:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config / jq -r . [default-config xtrace elided, as above]
00:07:37.180 [2024-07-24 04:56:51.609267] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
00:07:37.180 [2024-07-24 04:56:51.609396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65513 ]
00:07:37.180 [2024-07-24 04:56:51.760299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:37.450 [2024-07-24 04:56:51.917731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:37.451 [2024-07-24 04:56:51.917921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:37.451 [2024-07-24 04:56:51.918026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:37.451 [2024-07-24 04:56:51.918258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.709 04:56:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # option trace (loop plumbing elided): val=0xf, val=decompress (accel_opc=decompress), val='4096 bytes', val=software (accel_module=software), val=/home/vagrant/spdk_repo/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=Yes
00:07:37.710 04:56:52 accel.accel_decomp_mcore [trailing 'val=' loop-drain xtrace elided; run completes at 04:56:53]
00:07:39.610 04:56:53 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:39.610 04:56:53 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:39.610 04:56:53 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:39.610 real 0m2.286s
00:07:39.610 user 0m6.859s
00:07:39.611 sys 0m0.144s
00:07:39.611 ************************************
00:07:39.611 END TEST accel_decomp_mcore
00:07:39.611 ************************************
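One detail worth reading off these timings: under the 0xf mask, user time (6.859s) is roughly three times wall-clock (2.286s), which is what you would expect when several busy-polling SPDK reactors run concurrently for the duration of the test. A back-of-the-envelope check, with the values copied from the lines above:

# Average busy cores ~= user / real; illustrative arithmetic only.
awk 'BEGIN { printf "avg busy cores ~ %.1f\n", 6.859 / 2.286 }'   # -> ~3.0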
00:07:39.611 04:56:53 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:39.611 04:56:53 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:07:39.611 ************************************
00:07:39.611 START TEST accel_decomp_full_mcore
00:07:39.611 ************************************
00:07:39.611 04:56:53 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:39.611 04:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:39.611 04:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config / jq -r . [default-config xtrace elided, as above]
00:07:39.611 [2024-07-24 04:56:53.963918] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
00:07:39.611 [2024-07-24 04:56:53.964082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65557 ]
00:07:39.870 [2024-07-24 04:56:54.132295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:39.870 [2024-07-24 04:56:54.292961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:39.870 [2024-07-24 04:56:54.293112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:39.870 [2024-07-24 04:56:54.293240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:39.870 [2024-07-24 04:56:54.293440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:39.870 04:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # option trace (loop plumbing elided): val=0xf, val=decompress (accel_opc=decompress), val='111250 bytes', val=software (accel_module=software), val=/home/vagrant/spdk_repo/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=Yes
00:07:39.870 04:56:54 accel.accel_decomp_full_mcore [trailing 'val=' loop-drain xtrace elided; run completes at 04:56:56]
00:07:41.771 04:56:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:41.771 04:56:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:41.771 04:56:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:41.771 real 0m2.349s
00:07:41.771 user 0m6.965s
00:07:41.771 sys 0m0.172s
00:07:41.771 ************************************
00:07:41.771 END TEST accel_decomp_full_mcore
00:07:41.771 ************************************
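Every suite here is driven through run_test, which prints the START/END banners, runs the named body under bash's time builtin (producing the real/user/sys triplets above), and toggles xtrace around it. A minimal wrapper in the same spirit, assuming a simplified shape rather than the actual common/autotest_common.sh implementation:

run_test() {    # sketch only; the real helper lives in common/autotest_common.sh
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # emits the real/user/sys lines seen after each suite
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
run_test demo_sleep sleep 1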
00:07:41.771 04:56:56 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:07:41.771 04:56:56 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:07:41.771 ************************************
00:07:41.771 START TEST accel_decomp_mthread
00:07:41.771 ************************************
00:07:41.771 04:56:56 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:07:41.771 04:56:56 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:07:41.771 04:56:56 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config / jq -r . [default-config xtrace elided, as above]
00:07:41.771 [2024-07-24 04:56:56.360666] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
00:07:41.772 [2024-07-24 04:56:56.360827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65601 ]
00:07:42.030 [2024-07-24 04:56:56.529267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:42.289 [2024-07-24 04:56:56.689013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:42.289 04:56:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # option trace (loop plumbing elided): val=0x1, val=decompress (accel_opc=decompress), val='4096 bytes', val=software (accel_module=software), val=/home/vagrant/spdk_repo/spdk/test/accel/bib, val=32, val=32, val=2, val='1 seconds', val=Yes
00:07:42.289 04:56:56 accel.accel_decomp_mthread [trailing 'val=' loop-drain xtrace elided; run completes at 04:56:58]
00:07:44.194 04:56:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:44.194 04:56:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:44.194 04:56:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:44.194 real 0m2.237s
00:07:44.194 user 0m2.005s
00:07:44.195 sys 0m0.140s
00:07:44.195 ************************************
00:07:44.195 END TEST accel_decomp_mthread
00:07:44.195 ************************************
00:07:44.195 04:56:58 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:07:44.195 04:56:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
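This final suite composes both extra knobs: -o 0 (full-sized buffers, logged as val='111250 bytes' in the option trace) and -T 2 (logged as val=2 where the single-threaded suites log val=1, apparently a threads-per-core setting). The hypothetical standalone equivalent of the command just queued, with the same '{}' config stand-in as before:

# Hypothetical manual equivalent of the accel_decomp_full_mthread run below.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/accel_perf" -c /dev/fd/62 \
    -t 1 -w decompress -l "$SPDK/test/accel/bib" -y \
    -o 0 \
    -T 2 \
    62<<< '{}'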
************************************ 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:44.195 04:56:58 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:44.195 [2024-07-24 04:56:58.644862] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:07:44.195 [2024-07-24 04:56:58.645039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65642 ] 00:07:44.195 [2024-07-24 04:56:58.796154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.454 [2024-07-24 04:56:58.944777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
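The case/IFS=:/read -r var val churn in the trace above is accel.sh replaying the accel_perf options it is about to exercise: the harness emits each option as a colon-separated key/value pair and reads the pairs back one at a time (the trace shows accel_opc=decompress being captured this way at accel.sh@23). A minimal sketch of the same parsing pattern, with made-up key names standing in for the real option stream:

    printf '%s\n' 'opcode:decompress' 'runtime:1 seconds' |
    while IFS=: read -r var val; do
        case "$var" in
            opcode) accel_opc=$val ;;       # mirrors accel.sh capturing the workload
            *)      echo "$var = $val" ;;   # other options pass straight through
        esac
    done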
00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.713 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.714 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.714 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.714 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.714 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:44.714 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.714 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.714 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.714 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.714 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.714 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.714 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.714 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.714 04:56:59 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.714 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.714 04:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.619 00:07:46.619 real 0m2.284s 00:07:46.619 user 0m2.061s 00:07:46.619 sys 0m0.130s 00:07:46.619 ************************************ 00:07:46.619 END TEST accel_decomp_full_mthread 00:07:46.619 ************************************ 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.619 04:57:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
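Both decompress cases above drive the standalone accel_perf example with the arguments recorded in the run_test lines (the harness additionally feeds a generated accel JSON config through -c /dev/fd/62). A minimal sketch for repeating the full-buffer multithreaded run by hand, assuming the same checkout and build paths as this host:

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w decompress \
        -l test/accel/bib -y -o 0 -T 2    # flags copied verbatim from the trace above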
00:07:46.619 04:57:00 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:46.619 04:57:00 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:46.619 04:57:00 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:46.619 04:57:00 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.619 04:57:00 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:46.619 04:57:00 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.619 04:57:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.619 04:57:00 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.619 04:57:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.619 04:57:00 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.619 04:57:00 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.619 04:57:00 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:46.619 04:57:00 accel -- accel/accel.sh@41 -- # jq -r . 00:07:46.619 ************************************ 00:07:46.619 START TEST accel_dif_functional_tests 00:07:46.619 ************************************ 00:07:46.619 04:57:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:46.619 [2024-07-24 04:57:01.035192] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:46.619 [2024-07-24 04:57:01.035386] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65695 ] 00:07:46.619 [2024-07-24 04:57:01.205573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.878 [2024-07-24 04:57:01.362461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.878 [2024-07-24 04:57:01.362549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.878 [2024-07-24 04:57:01.362564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.137 00:07:47.137 00:07:47.137 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.137 http://cunit.sourceforge.net/ 00:07:47.137 00:07:47.137 00:07:47.137 Suite: accel_dif 00:07:47.137 Test: verify: DIF generated, GUARD check ...passed 00:07:47.137 Test: verify: DIF generated, APPTAG check ...passed 00:07:47.137 Test: verify: DIF generated, REFTAG check ...passed 00:07:47.137 Test: verify: DIF not generated, GUARD check ...passed 00:07:47.137 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 04:57:01.617155] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:47.137 [2024-07-24 04:57:01.617321] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:47.137 passed 00:07:47.137 Test: verify: DIF not generated, REFTAG check ...passed 00:07:47.137 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:47.137 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 04:57:01.617392] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:47.137 passed 00:07:47.137 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-24 04:57:01.617560] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:47.137 passed 00:07:47.137 Test: verify: REFTAG incorrect, REFTAG 
ignore ...passed
00:07:47.137 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:07:47.137 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed
00:07:47.137 Test: verify copy: DIF generated, GUARD check ...[2024-07-24 04:57:01.618086] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:07:47.137 passed
00:07:47.137 Test: verify copy: DIF generated, APPTAG check ...passed
00:07:47.137 Test: verify copy: DIF generated, REFTAG check ...passed
00:07:47.137 Test: verify copy: DIF not generated, GUARD check ...passed
00:07:47.137 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-24 04:57:01.618597] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:47.137 [2024-07-24 04:57:01.618753] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:47.137 passed
00:07:47.137 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-24 04:57:01.619034] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:47.137 passed
00:07:47.137 Test: generate copy: DIF generated, GUARD check ...passed
00:07:47.137 Test: generate copy: DIF generated, APPTAG check ...passed
00:07:47.137 Test: generate copy: DIF generated, REFTAG check ...passed
00:07:47.137 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:07:47.137 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:07:47.138 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:07:47.138 Test: generate copy: iovecs-len validate ...[2024-07-24 04:57:01.619703] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:07:47.138 passed
00:07:47.138 Test: generate copy: buffer alignment validate ...passed
00:07:47.138
00:07:47.138 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.138               suites      1      1    n/a      0        0
00:07:47.138                tests     26     26     26      0        0
00:07:47.138              asserts    115    115    115      0      n/a
00:07:47.138
00:07:47.138 Elapsed time = 0.008 seconds
00:07:48.074
00:07:48.074 real 0m1.686s
00:07:48.074 user 0m3.167s
00:07:48.074 sys 0m0.194s
00:07:48.074 ************************************
00:07:48.074 END TEST accel_dif_functional_tests
00:07:48.074 ************************************
00:07:48.074 04:57:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:48.074 04:57:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:07:48.074
00:07:48.074 real 0m54.386s
00:07:48.074 user 0m59.519s
00:07:48.074 sys 0m4.689s
00:07:48.074 ************************************
00:07:48.074 END TEST accel
00:07:48.074 ************************************
00:07:48.074 04:57:02 accel -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:48.074 04:57:02 accel -- common/autotest_common.sh@10 -- # set +x
00:07:48.333 04:57:02 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh
00:07:48.333 04:57:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:48.333 04:57:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:48.333 04:57:02 -- common/autotest_common.sh@10 -- # set +x
00:07:48.333 ************************************
00:07:48.333 START TEST accel_rpc
00:07:48.333 ************************************
00:07:48.333 04:57:02 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh
00:07:48.333 * Looking for test storage...
00:07:48.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel
00:07:48.333 04:57:02 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:48.333 04:57:02 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=65772
00:07:48.333 04:57:02 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 65772
00:07:48.333 04:57:02 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
00:07:48.333 04:57:02 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 65772 ']'
00:07:48.333 04:57:02 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:48.333 04:57:02 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:48.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:48.333 04:57:02 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:48.333 04:57:02 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:48.333 04:57:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:48.333 [2024-07-24 04:57:02.899536] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
00:07:48.333 [2024-07-24 04:57:02.899959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65772 ] 00:07:48.592 [2024-07-24 04:57:03.062342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.863 [2024-07-24 04:57:03.237104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.435 04:57:03 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.435 04:57:03 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:49.435 04:57:03 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:49.435 04:57:03 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:49.435 04:57:03 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:49.435 04:57:03 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:49.435 04:57:03 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:49.435 04:57:03 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:49.435 04:57:03 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.435 04:57:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.435 ************************************ 00:07:49.435 START TEST accel_assign_opcode 00:07:49.435 ************************************ 00:07:49.435 04:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:49.435 04:57:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:49.435 04:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.435 04:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:49.435 [2024-07-24 04:57:03.873959] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:49.435 04:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.435 04:57:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:49.435 04:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.435 04:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:49.435 [2024-07-24 04:57:03.881926] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:49.435 04:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.435 04:57:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:49.435 04:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.435 04:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:50.003 04:57:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.003 04:57:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:50.003 04:57:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:50.003 04:57:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.003 04:57:04 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:07:50.003 04:57:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:50.003 04:57:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.003 software 00:07:50.004 00:07:50.004 real 0m0.626s 00:07:50.004 user 0m0.048s 00:07:50.004 sys 0m0.010s 00:07:50.004 04:57:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.004 04:57:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:50.004 ************************************ 00:07:50.004 END TEST accel_assign_opcode 00:07:50.004 ************************************ 00:07:50.004 04:57:04 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 65772 00:07:50.004 04:57:04 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 65772 ']' 00:07:50.004 04:57:04 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 65772 00:07:50.004 04:57:04 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:50.004 04:57:04 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:50.004 04:57:04 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65772 00:07:50.004 killing process with pid 65772 00:07:50.004 04:57:04 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:50.004 04:57:04 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:50.004 04:57:04 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65772' 00:07:50.004 04:57:04 accel_rpc -- common/autotest_common.sh@967 -- # kill 65772 00:07:50.004 04:57:04 accel_rpc -- common/autotest_common.sh@972 -- # wait 65772 00:07:51.909 00:07:51.909 real 0m3.586s 00:07:51.909 user 0m3.672s 00:07:51.909 sys 0m0.468s 00:07:51.909 04:57:06 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.909 ************************************ 00:07:51.909 END TEST accel_rpc 00:07:51.909 ************************************ 00:07:51.909 04:57:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.909 04:57:06 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:51.909 04:57:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:51.909 04:57:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.909 04:57:06 -- common/autotest_common.sh@10 -- # set +x 00:07:51.909 ************************************ 00:07:51.909 START TEST app_cmdline 00:07:51.909 ************************************ 00:07:51.909 04:57:06 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:51.909 * Looking for test storage... 00:07:51.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:51.909 04:57:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:51.909 04:57:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=65882 00:07:51.909 04:57:06 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:51.909 04:57:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 65882 00:07:51.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
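The accel_rpc suite that just finished is a three-RPC round trip against a spdk_tgt started with --wait-for-rpc, all visible in the trace: pin an opcode to a module, finish initialization, then read the assignment back. A condensed sketch of the same sequence (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py):

    ./scripts/rpc.py accel_assign_opc -o copy -m software      # assign the copy opcode
    ./scripts/rpc.py framework_start_init                      # let subsystem init proceed
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # prints "software" on success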
00:07:51.909 04:57:06 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 65882 ']' 00:07:51.909 04:57:06 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.909 04:57:06 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.909 04:57:06 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.909 04:57:06 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.909 04:57:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:51.909 [2024-07-24 04:57:06.528115] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:51.909 [2024-07-24 04:57:06.528245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65882 ] 00:07:52.168 [2024-07-24 04:57:06.686937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.426 [2024-07-24 04:57:06.844621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.995 04:57:07 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.995 04:57:07 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:52.995 04:57:07 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:53.276 { 00:07:53.276 "version": "SPDK v24.09-pre git sha1 78cbcfdde", 00:07:53.276 "fields": { 00:07:53.276 "major": 24, 00:07:53.276 "minor": 9, 00:07:53.276 "patch": 0, 00:07:53.276 "suffix": "-pre", 00:07:53.276 "commit": "78cbcfdde" 00:07:53.276 } 00:07:53.276 } 00:07:53.276 04:57:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:53.276 04:57:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:53.276 04:57:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:53.276 04:57:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:53.276 04:57:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:53.276 04:57:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:53.276 04:57:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:53.276 04:57:07 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.276 04:57:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:53.276 04:57:07 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.276 04:57:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:53.276 04:57:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:53.276 04:57:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.276 04:57:07 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:53.276 04:57:07 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.276 04:57:07 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.276 04:57:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.276 04:57:07 
app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.276 04:57:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.276 04:57:07 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.276 04:57:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.276 04:57:07 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.276 04:57:07 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:53.276 04:57:07 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.534 request: 00:07:53.534 { 00:07:53.534 "method": "env_dpdk_get_mem_stats", 00:07:53.534 "req_id": 1 00:07:53.534 } 00:07:53.534 Got JSON-RPC error response 00:07:53.534 response: 00:07:53.534 { 00:07:53.534 "code": -32601, 00:07:53.534 "message": "Method not found" 00:07:53.534 } 00:07:53.534 04:57:08 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:53.534 04:57:08 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:53.534 04:57:08 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:53.534 04:57:08 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:53.534 04:57:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 65882 00:07:53.534 04:57:08 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 65882 ']' 00:07:53.534 04:57:08 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 65882 00:07:53.534 04:57:08 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:53.534 04:57:08 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:53.534 04:57:08 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65882 00:07:53.534 killing process with pid 65882 00:07:53.534 04:57:08 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:53.534 04:57:08 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:53.534 04:57:08 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65882' 00:07:53.534 04:57:08 app_cmdline -- common/autotest_common.sh@967 -- # kill 65882 00:07:53.534 04:57:08 app_cmdline -- common/autotest_common.sh@972 -- # wait 65882 00:07:55.439 00:07:55.440 real 0m3.419s 00:07:55.440 user 0m3.901s 00:07:55.440 sys 0m0.454s 00:07:55.440 04:57:09 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.440 ************************************ 00:07:55.440 END TEST app_cmdline 00:07:55.440 ************************************ 00:07:55.440 04:57:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:55.440 04:57:09 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:55.440 04:57:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.440 04:57:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.440 04:57:09 -- common/autotest_common.sh@10 -- # set +x 00:07:55.440 ************************************ 00:07:55.440 START TEST version 00:07:55.440 ************************************ 00:07:55.440 04:57:09 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:55.440 * Looking for test storage... 
00:07:55.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:55.440 04:57:09 version -- app/version.sh@17 -- # get_header_version major 00:07:55.440 04:57:09 version -- app/version.sh@14 -- # cut -f2 00:07:55.440 04:57:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.440 04:57:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.440 04:57:09 version -- app/version.sh@17 -- # major=24 00:07:55.440 04:57:09 version -- app/version.sh@18 -- # get_header_version minor 00:07:55.440 04:57:09 version -- app/version.sh@14 -- # cut -f2 00:07:55.440 04:57:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.440 04:57:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.440 04:57:09 version -- app/version.sh@18 -- # minor=9 00:07:55.440 04:57:09 version -- app/version.sh@19 -- # get_header_version patch 00:07:55.440 04:57:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.440 04:57:09 version -- app/version.sh@14 -- # cut -f2 00:07:55.440 04:57:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.440 04:57:09 version -- app/version.sh@19 -- # patch=0 00:07:55.440 04:57:09 version -- app/version.sh@20 -- # get_header_version suffix 00:07:55.440 04:57:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.440 04:57:09 version -- app/version.sh@14 -- # cut -f2 00:07:55.440 04:57:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.440 04:57:09 version -- app/version.sh@20 -- # suffix=-pre 00:07:55.440 04:57:09 version -- app/version.sh@22 -- # version=24.9 00:07:55.440 04:57:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:55.440 04:57:09 version -- app/version.sh@28 -- # version=24.9rc0 00:07:55.440 04:57:09 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:55.440 04:57:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:55.440 04:57:09 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:55.440 04:57:09 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:55.440 00:07:55.440 real 0m0.154s 00:07:55.440 user 0m0.099s 00:07:55.440 sys 0m0.086s 00:07:55.440 ************************************ 00:07:55.440 END TEST version 00:07:55.440 ************************************ 00:07:55.440 04:57:09 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.440 04:57:09 version -- common/autotest_common.sh@10 -- # set +x 00:07:55.440 04:57:10 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:55.440 04:57:10 -- spdk/autotest.sh@198 -- # uname -s 00:07:55.440 04:57:10 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:55.440 04:57:10 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:55.440 04:57:10 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:55.440 04:57:10 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:07:55.440 04:57:10 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:55.440 04:57:10 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
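version.sh above assembles the version string one #define at a time from include/spdk/version.h, using the grep/cut/tr pipeline shown in the trace, and then cross-checks it against the bundled Python package. A condensed sketch of that extraction, run from the repository root (get_ver is a hypothetical helper wrapping the traced pipeline; the import needs PYTHONPATH pointing at ./python, as set in the trace):

    get_ver() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'; }
    echo "$(get_ver MAJOR).$(get_ver MINOR)"             # 24.9 for this checkout
    python3 -c 'import spdk; print(spdk.__version__)'    # 24.9rc0; a -pre suffix is compared as rc0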
00:07:55.440 04:57:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.440 04:57:10 -- common/autotest_common.sh@10 -- # set +x 00:07:55.440 ************************************ 00:07:55.440 START TEST blockdev_nvme 00:07:55.440 ************************************ 00:07:55.440 04:57:10 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:55.699 * Looking for test storage... 00:07:55.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:55.699 04:57:10 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66044 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 66044 00:07:55.699 04:57:10 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:55.699 04:57:10 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 66044 ']' 00:07:55.699 04:57:10 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.699 04:57:10 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:55.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
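With spdk_tgt up, setup_nvme_conf captures scripts/gen_nvme.sh output and hands it to load_subsystem_config, which is what the rpc_cmd call below does for all four controllers. A single-controller sketch of the same call, with the JSON shape taken from the generated config that follows:

    ./scripts/rpc.py load_subsystem_config -j '{ "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } } ] }'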
00:07:55.699 04:57:10 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.699 04:57:10 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:55.699 04:57:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.699 [2024-07-24 04:57:10.252019] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:55.699 [2024-07-24 04:57:10.252191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66044 ] 00:07:55.958 [2024-07-24 04:57:10.426693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.958 [2024-07-24 04:57:10.573568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.895 04:57:11 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.895 04:57:11 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:07:56.895 04:57:11 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:56.895 04:57:11 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:56.895 04:57:11 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:56.895 04:57:11 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:56.895 04:57:11 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:56.895 04:57:11 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:56.895 04:57:11 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.895 04:57:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.155 04:57:11 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.155 04:57:11 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:57.155 04:57:11 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.155 04:57:11 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.155 04:57:11 blockdev_nvme -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.155 04:57:11 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.155 04:57:11 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:57.155 04:57:11 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.155 04:57:11 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:57.155 04:57:11 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.155 04:57:11 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:57.155 04:57:11 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:57.156 04:57:11 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "47af083d-fc8a-4689-b669-9ac3f65cb235"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "47af083d-fc8a-4689-b669-9ac3f65cb235",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "5a555716-fcad-459b-a859-d8064a37db11"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5a555716-fcad-459b-a859-d8064a37db11",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' 
' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2b67c1da-b8fd-46ad-b178-daa932d24421"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2b67c1da-b8fd-46ad-b178-daa932d24421",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "7f814219-e362-46b2-b026-5db79cf162d6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7f814219-e362-46b2-b026-5db79cf162d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' 
' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "01426afe-f687-46ea-9a1e-f2d439d9bfb6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "01426afe-f687-46ea-9a1e-f2d439d9bfb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0679e6a9-dbaf-47ad-9771-8243aee7c5f1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0679e6a9-dbaf-47ad-9771-8243aee7c5f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:57.415 04:57:11 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:57.415 04:57:11 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:57.415 04:57:11 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:57.415 04:57:11 blockdev_nvme -- bdev/blockdev.sh@753 -- # 
killprocess 66044 00:07:57.415 04:57:11 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 66044 ']' 00:07:57.415 04:57:11 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 66044 00:07:57.415 04:57:11 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:07:57.415 04:57:11 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:57.415 04:57:11 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66044 00:07:57.415 killing process with pid 66044 00:07:57.415 04:57:11 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:57.415 04:57:11 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:57.415 04:57:11 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66044' 00:07:57.415 04:57:11 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 66044 00:07:57.415 04:57:11 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 66044 00:07:59.320 04:57:13 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:59.320 04:57:13 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:59.320 04:57:13 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:59.320 04:57:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.320 04:57:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.320 ************************************ 00:07:59.320 START TEST bdev_hello_world 00:07:59.320 ************************************ 00:07:59.320 04:57:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:59.320 [2024-07-24 04:57:13.656748] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:59.320 [2024-07-24 04:57:13.656933] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66132 ] 00:07:59.320 [2024-07-24 04:57:13.824699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.579 [2024-07-24 04:57:13.984921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.146 [2024-07-24 04:57:14.532464] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:00.146 [2024-07-24 04:57:14.532515] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:00.146 [2024-07-24 04:57:14.532557] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:00.146 [2024-07-24 04:57:14.535170] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:00.146 [2024-07-24 04:57:14.535791] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:00.146 [2024-07-24 04:57:14.535971] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:00.146 [2024-07-24 04:57:14.536225] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
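The hello_bdev example exercised above is self-contained: it opens the named bdev, writes a "Hello World!" string, reads it back, and logs each step as the NOTICE lines show. Repeating the run outside the harness uses the same binary and config as the run_test line:

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
    # expected tail of the output:
    #   Read string from bdev : Hello World!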
00:08:00.146 00:08:00.146 [2024-07-24 04:57:14.536260] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:01.082 ************************************ 00:08:01.082 END TEST bdev_hello_world 00:08:01.082 ************************************ 00:08:01.082 00:08:01.082 real 0m1.934s 00:08:01.082 user 0m1.617s 00:08:01.082 sys 0m0.208s 00:08:01.082 04:57:15 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.082 04:57:15 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:01.082 04:57:15 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:01.082 04:57:15 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:01.082 04:57:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.082 04:57:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.082 ************************************ 00:08:01.082 START TEST bdev_bounds 00:08:01.082 ************************************ 00:08:01.082 04:57:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:08:01.082 04:57:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=66170 00:08:01.082 Process bdevio pid: 66170 00:08:01.082 04:57:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:01.082 04:57:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 66170' 00:08:01.082 04:57:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 66170 00:08:01.082 04:57:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:01.082 04:57:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 66170 ']' 00:08:01.082 04:57:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.082 04:57:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:01.082 04:57:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.082 04:57:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:01.082 04:57:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:01.082 [2024-07-24 04:57:15.650712] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:01.082 [2024-07-24 04:57:15.651152] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66170 ] 00:08:01.340 [2024-07-24 04:57:15.821981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.599 [2024-07-24 04:57:15.975538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.599 [2024-07-24 04:57:15.975591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.599 [2024-07-24 04:57:15.975604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.165 04:57:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.165 04:57:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:08:02.166 04:57:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:02.166 I/O targets: 00:08:02.166 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:02.166 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:02.166 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:02.166 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:02.166 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:02.166 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:02.166 00:08:02.166 00:08:02.166 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.166 http://cunit.sourceforge.net/ 00:08:02.166 00:08:02.166 00:08:02.166 Suite: bdevio tests on: Nvme3n1 00:08:02.166 Test: blockdev write read block ...passed 00:08:02.166 Test: blockdev write zeroes read block ...passed 00:08:02.166 Test: blockdev write zeroes read no split ...passed 00:08:02.166 Test: blockdev write zeroes read split ...passed 00:08:02.166 Test: blockdev write zeroes read split partial ...passed 00:08:02.166 Test: blockdev reset ...[2024-07-24 04:57:16.718179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:02.166 [2024-07-24 04:57:16.721970] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:02.166 passed
00:08:02.166 Test: blockdev write read 8 blocks ...passed
00:08:02.166 Test: blockdev write read size > 128k ...passed
00:08:02.166 Test: blockdev write read invalid size ...passed
00:08:02.166 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:02.166 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:02.166 Test: blockdev write read max offset ...passed
00:08:02.166 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:02.166 Test: blockdev writev readv 8 blocks ...passed
00:08:02.166 Test: blockdev writev readv 30 x 1block ...passed
00:08:02.166 Test: blockdev writev readv block ...passed
00:08:02.166 Test: blockdev writev readv size > 128k ...passed
00:08:02.166 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:02.166 Test: blockdev comparev and writev ...[2024-07-24 04:57:16.731046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x277a0a000 len:0x1000
00:08:02.166 [2024-07-24 04:57:16.731120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:02.166 passed
00:08:02.166 Test: blockdev nvme passthru rw ...passed
00:08:02.166 Test: blockdev nvme passthru vendor specific ...passed
00:08:02.166 Test: blockdev nvme admin passthru ...[2024-07-24 04:57:16.732001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:02.166 [2024-07-24 04:57:16.732043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:02.166 passed
00:08:02.166 Test: blockdev copy ...passed
00:08:02.166 Suite: bdevio tests on: Nvme2n3
00:08:02.166 Test: blockdev write read block ...passed
00:08:02.166 Test: blockdev write zeroes read block ...passed
00:08:02.166 Test: blockdev write zeroes read no split ...passed
00:08:02.166 Test: blockdev write zeroes read split ...passed
00:08:02.425 Test: blockdev write zeroes read split partial ...passed
00:08:02.425 Test: blockdev reset ...[2024-07-24 04:57:16.814690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:08:02.425 [2024-07-24 04:57:16.818646] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:02.425 passed
00:08:02.425 Test: blockdev write read 8 blocks ...passed
00:08:02.425 Test: blockdev write read size > 128k ...passed
00:08:02.425 Test: blockdev write read invalid size ...passed
00:08:02.425 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:02.425 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:02.425 Test: blockdev write read max offset ...passed
00:08:02.425 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:02.425 Test: blockdev writev readv 8 blocks ...passed
00:08:02.425 Test: blockdev writev readv 30 x 1block ...passed
00:08:02.425 Test: blockdev writev readv block ...passed
00:08:02.425 Test: blockdev writev readv size > 128k ...passed
00:08:02.425 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:02.425 Test: blockdev comparev and writev ...[2024-07-24 04:57:16.829658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26fe04000 len:0x1000
00:08:02.425 [2024-07-24 04:57:16.829729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:02.425 passed
00:08:02.425 Test: blockdev nvme passthru rw ...passed
00:08:02.425 Test: blockdev nvme passthru vendor specific ...passed
00:08:02.425 Test: blockdev nvme admin passthru ...[2024-07-24 04:57:16.830712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:02.425 [2024-07-24 04:57:16.830756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:02.425 passed
00:08:02.425 Test: blockdev copy ...passed
00:08:02.425 Suite: bdevio tests on: Nvme2n2
00:08:02.425 Test: blockdev write read block ...passed
00:08:02.425 Test: blockdev write zeroes read block ...passed
00:08:02.425 Test: blockdev write zeroes read no split ...passed
00:08:02.425 Test: blockdev write zeroes read split ...passed
00:08:02.425 Test: blockdev write zeroes read split partial ...passed
00:08:02.425 Test: blockdev reset ...[2024-07-24 04:57:16.901435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:08:02.425 [2024-07-24 04:57:16.905279] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:02.425 passed
00:08:02.425 Test: blockdev write read 8 blocks ...passed
00:08:02.425 Test: blockdev write read size > 128k ...passed
00:08:02.425 Test: blockdev write read invalid size ...passed
00:08:02.425 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:02.425 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:02.425 Test: blockdev write read max offset ...passed
00:08:02.425 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:02.425 Test: blockdev writev readv 8 blocks ...passed
00:08:02.425 Test: blockdev writev readv 30 x 1block ...passed
00:08:02.425 Test: blockdev writev readv block ...passed
00:08:02.425 Test: blockdev writev readv size > 128k ...passed
00:08:02.425 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:02.425 Test: blockdev comparev and writev ...[2024-07-24 04:57:16.914974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27943a000 len:0x1000
00:08:02.425 [2024-07-24 04:57:16.915025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:02.425 passed
00:08:02.425 Test: blockdev nvme passthru rw ...passed
00:08:02.425 Test: blockdev nvme passthru vendor specific ...[2024-07-24 04:57:16.916022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:02.425 [2024-07-24 04:57:16.916074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:02.425 passed
00:08:02.425 Test: blockdev nvme admin passthru ...passed
00:08:02.425 Test: blockdev copy ...passed
00:08:02.425 Suite: bdevio tests on: Nvme2n1
00:08:02.425 Test: blockdev write read block ...passed
00:08:02.425 Test: blockdev write zeroes read block ...passed
00:08:02.425 Test: blockdev write zeroes read no split ...passed
00:08:02.425 Test: blockdev write zeroes read split ...passed
00:08:02.425 Test: blockdev write zeroes read split partial ...passed
00:08:02.425 Test: blockdev reset ...[2024-07-24 04:57:16.988821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:08:02.425 [2024-07-24 04:57:16.992946] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:02.425 passed
00:08:02.425 Test: blockdev write read 8 blocks ...passed
00:08:02.425 Test: blockdev write read size > 128k ...passed
00:08:02.425 Test: blockdev write read invalid size ...passed
00:08:02.425 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:02.425 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:02.425 Test: blockdev write read max offset ...passed
00:08:02.425 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:02.425 Test: blockdev writev readv 8 blocks ...passed
00:08:02.425 Test: blockdev writev readv 30 x 1block ...passed
00:08:02.425 Test: blockdev writev readv block ...passed
00:08:02.425 Test: blockdev writev readv size > 128k ...passed
00:08:02.425 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:02.425 Test: blockdev comparev and writev ...[2024-07-24 04:57:17.001864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x279434000 len:0x1000
00:08:02.425 [2024-07-24 04:57:17.001931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:02.425 passed
00:08:02.425 Test: blockdev nvme passthru rw ...passed
00:08:02.425 Test: blockdev nvme passthru vendor specific ...[2024-07-24 04:57:17.002908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:02.425 [2024-07-24 04:57:17.002950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:02.425 passed
00:08:02.425 Test: blockdev nvme admin passthru ...passed
00:08:02.425 Test: blockdev copy ...passed
00:08:02.425 Suite: bdevio tests on: Nvme1n1
00:08:02.425 Test: blockdev write read block ...passed
00:08:02.425 Test: blockdev write zeroes read block ...passed
00:08:02.425 Test: blockdev write zeroes read no split ...passed
00:08:02.425 Test: blockdev write zeroes read split ...passed
00:08:02.684 Test: blockdev write zeroes read split partial ...passed
00:08:02.684 Test: blockdev reset ...[2024-07-24 04:57:17.074661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller
00:08:02.684 [2024-07-24 04:57:17.077978] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:02.684 passed
00:08:02.684 Test: blockdev write read 8 blocks ...passed
00:08:02.684 Test: blockdev write read size > 128k ...passed
00:08:02.684 Test: blockdev write read invalid size ...passed
00:08:02.684 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:02.684 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:02.684 Test: blockdev write read max offset ...passed
00:08:02.684 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:02.684 Test: blockdev writev readv 8 blocks ...passed
00:08:02.684 Test: blockdev writev readv 30 x 1block ...passed
00:08:02.684 Test: blockdev writev readv block ...passed
00:08:02.684 Test: blockdev writev readv size > 128k ...passed
00:08:02.684 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:02.684 Test: blockdev comparev and writev ...[2024-07-24 04:57:17.086865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x279430000 len:0x1000
00:08:02.684 [2024-07-24 04:57:17.086929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:02.684 passed
00:08:02.684 Test: blockdev nvme passthru rw ...passed
00:08:02.684 Test: blockdev nvme passthru vendor specific ...[2024-07-24 04:57:17.087787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:02.684 [2024-07-24 04:57:17.087849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:02.684 passed
00:08:02.684 Test: blockdev nvme admin passthru ...passed
00:08:02.684 Test: blockdev copy ...passed
00:08:02.684 Suite: bdevio tests on: Nvme0n1
00:08:02.684 Test: blockdev write read block ...passed
00:08:02.684 Test: blockdev write zeroes read block ...passed
00:08:02.684 Test: blockdev write zeroes read no split ...passed
00:08:02.684 Test: blockdev write zeroes read split ...passed
00:08:02.684 Test: blockdev write zeroes read split partial ...passed
00:08:02.684 Test: blockdev reset ...[2024-07-24 04:57:17.145996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller
00:08:02.684 [2024-07-24 04:57:17.149373] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:02.684 passed
00:08:02.684 Test: blockdev write read 8 blocks ...passed
00:08:02.684 Test: blockdev write read size > 128k ...passed
00:08:02.684 Test: blockdev write read invalid size ...passed
00:08:02.684 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:02.684 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:02.684 Test: blockdev write read max offset ...passed
00:08:02.685 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:02.685 Test: blockdev writev readv 8 blocks ...passed
00:08:02.685 Test: blockdev writev readv 30 x 1block ...passed
00:08:02.685 Test: blockdev writev readv block ...passed
00:08:02.685 Test: blockdev writev readv size > 128k ...passed
00:08:02.685 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:02.685 Test: blockdev comparev and writev ...passed
00:08:02.685 Test: blockdev nvme passthru rw ...[2024-07-24 04:57:17.157503] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:08:02.685 separate metadata which is not supported yet.
00:08:02.685 passed
00:08:02.685 Test: blockdev nvme passthru vendor specific ...passed
00:08:02.685 Test: blockdev nvme admin passthru ...[2024-07-24 04:57:17.158170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:08:02.685 [2024-07-24 04:57:17.158239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:08:02.685 passed
00:08:02.685 Test: blockdev copy ...passed
00:08:02.685
00:08:02.685 Run Summary: Type Total Ran Passed Failed Inactive
00:08:02.685 suites 6 6 n/a 0 0
00:08:02.685 tests 138 138 138 0 0
00:08:02.685 asserts 893 893 893 0 n/a
00:08:02.685
00:08:02.685 Elapsed time = 1.358 seconds
00:08:02.685 0
00:08:02.685 04:57:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 66170
00:08:02.685 04:57:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 66170 ']'
00:08:02.685 04:57:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 66170
00:08:02.685 04:57:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname
00:08:02.685 04:57:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:02.685 04:57:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66170
00:08:02.685 04:57:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:08:02.685 04:57:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:08:02.685 04:57:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66170' killing process with pid 66170
00:08:02.685 04:57:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 66170
00:08:02.685 04:57:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 66170
00:08:03.622 04:57:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:08:03.622
00:08:03.622 real 0m2.523s
00:08:03.622 user 0m6.243s
00:08:03.622 sys 0m0.339s
00:08:03.622 04:57:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:03.622 04:57:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:08:03.622 ************************************
00:08:03.622 END TEST bdev_bounds
00:08:03.622 ************************************
00:08:03.622 04:57:18 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:08:03.622 04:57:18 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:08:03.622 04:57:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:03.622 04:57:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:03.622 ************************************
00:08:03.622 START TEST bdev_nbd
00:08:03.622 ************************************
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=66235
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 66235 /var/tmp/spdk-nbd.sock
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 66235 ']'
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:03.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:03.622 04:57:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:08:03.622 [2024-07-24 04:57:18.207030] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
00:08:03.622 [2024-07-24 04:57:18.207422] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:03.881 [2024-07-24 04:57:18.366549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:04.140 [2024-07-24 04:57:18.520127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:04.709 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:08:04.968 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:08:04.968 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:08:04.968 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:08:04.968 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:08:04.968 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:04.968 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:04.968 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:04.968 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:08:04.969 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:04.969 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:04.969 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:04.969 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:04.969 1+0 records in
00:08:04.969 1+0 records out
00:08:04.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475976 s, 8.6 MB/s
00:08:04.969 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:04.969 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:04.969 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:04.969 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:04.969 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:04.969 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:04.969 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:04.969 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:05.229 1+0 records in
00:08:05.229 1+0 records out
00:08:05.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628492 s, 6.5 MB/s
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:05.229 04:57:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:05.488 1+0 records in
00:08:05.488 1+0 records out
00:08:05.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103561 s, 4.0 MB/s
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:05.488 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2
00:08:05.747 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:08:05.747 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:08:05.747 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:08:05.747 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3
00:08:05.747 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:05.747 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:05.747 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:05.747 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions
00:08:05.747 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:05.747 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:05.747 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:05.747 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:05.747 1+0 records in
00:08:05.747 1+0 records out
00:08:05.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000746876 s, 5.5 MB/s
00:08:05.747 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:05.748 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:05.748 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:05.748 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:05.748 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:05.748 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:05.748 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:05.748 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:06.006 1+0 records in
00:08:06.006 1+0 records out
00:08:06.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00098826 s, 4.1 MB/s
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:06.006 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:06.266 1+0 records in
00:08:06.266 1+0 records out
00:08:06.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100086 s, 4.1 MB/s
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:06.266 04:57:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:06.525 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:08:06.525 {
00:08:06.525 "nbd_device": "/dev/nbd0",
00:08:06.525 "bdev_name": "Nvme0n1"
00:08:06.525 },
00:08:06.525 {
00:08:06.525 "nbd_device": "/dev/nbd1",
00:08:06.525 "bdev_name": "Nvme1n1"
00:08:06.525 },
00:08:06.525 {
00:08:06.525 "nbd_device": "/dev/nbd2",
00:08:06.525 "bdev_name": "Nvme2n1"
00:08:06.525 },
00:08:06.525 {
00:08:06.525 "nbd_device": "/dev/nbd3",
00:08:06.525 "bdev_name": "Nvme2n2"
00:08:06.525 },
00:08:06.525 {
00:08:06.525 "nbd_device": "/dev/nbd4",
00:08:06.525 "bdev_name": "Nvme2n3"
00:08:06.525 },
00:08:06.525 {
00:08:06.525 "nbd_device": "/dev/nbd5",
00:08:06.525 "bdev_name": "Nvme3n1"
00:08:06.525 }
00:08:06.525 ]'
00:08:06.525 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:08:06.525 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:08:06.525 {
00:08:06.525 "nbd_device": "/dev/nbd0",
00:08:06.525 "bdev_name": "Nvme0n1"
00:08:06.525 },
00:08:06.525 {
00:08:06.525 "nbd_device": "/dev/nbd1",
00:08:06.525 "bdev_name": "Nvme1n1"
00:08:06.525 },
00:08:06.525 {
00:08:06.525 "nbd_device": "/dev/nbd2",
00:08:06.525 "bdev_name": "Nvme2n1"
00:08:06.525 },
00:08:06.525 {
00:08:06.525 "nbd_device": "/dev/nbd3",
00:08:06.525 "bdev_name": "Nvme2n2"
00:08:06.525 },
00:08:06.525 {
00:08:06.525 "nbd_device": "/dev/nbd4",
00:08:06.525 "bdev_name": "Nvme2n3"
00:08:06.525 },
00:08:06.525 {
00:08:06.525 "nbd_device": "/dev/nbd5",
00:08:06.525 "bdev_name": "Nvme3n1"
00:08:06.525 }
00:08:06.525 ]'
00:08:06.525 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5'
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5')
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:06.787 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:07.056 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:07.056 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:07.056 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:07.056 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:07.056 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:07.056 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:07.056 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:07.056 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:07.056 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:07.056 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:08:07.315 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:08:07.315 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:08:07.315 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:08:07.315 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:07.315 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:07.315 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:08:07.315 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:07.315 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:07.315 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:07.315 04:57:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:08:07.574 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:08:07.574 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:08:07.574 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:08:07.574 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:07.574 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:07.574 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:08:07.574 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:07.574 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:07.574 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:07.574 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:08:07.833 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:08:07.833 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:08:07.833 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:08:07.833 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:07.833 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:07.833 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:08:07.833 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:07.833 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:07.833 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:07.833 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:08:08.092 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:08:08.092 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:08:08.092 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:08:08.092 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:08.092 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:08.092 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:08:08.092 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:08.092 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:08.092 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:08.092 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:08.092 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:08.352 04:57:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:08:08.611 /dev/nbd0
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:08.611 1+0 records in
00:08:08.611 1+0 records out
00:08:08.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461249 s, 8.9 MB/s
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:08.611 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1
00:08:08.876 /dev/nbd1
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:08.876 1+0 records in
00:08:08.876 1+0 records out
00:08:08.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606691 s, 6.8 MB/s
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:08.876 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10
00:08:09.141 /dev/nbd10
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:09.141 1+0 records in
00:08:09.141 1+0 records out
00:08:09.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663858 s, 6.2 MB/s
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:09.141 04:57:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11
00:08:09.399 /dev/nbd11
00:08:09.659 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:08:09.659 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:08:09.659 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11
00:08:09.659 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:09.659 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:09.659 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:09.659 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions
00:08:09.659 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:09.659 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:09.659 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:09.659 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:09.659 1+0 records in
00:08:09.659 1+0 records out
00:08:09.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666549 s, 6.1 MB/s
00:08:09.660 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:09.660 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:09.660 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:09.660 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:09.660 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:09.660 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:09.660 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:09.660 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12
00:08:09.660 /dev/nbd12
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:09.919 1+0 records in
00:08:09.919 1+0 records out
00:08:09.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000895328 s, 4.6 MB/s
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:09.919 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13
00:08:10.178 /dev/nbd13
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:10.178 1+0 records in
00:08:10.178 1+0 records out
00:08:10.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000965027 s, 4.2 MB/s
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:10.178 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:10.437 {
00:08:10.437 "nbd_device": "/dev/nbd0",
00:08:10.437 "bdev_name": "Nvme0n1"
00:08:10.437 },
00:08:10.437 {
00:08:10.437 "nbd_device": "/dev/nbd1",
00:08:10.437 "bdev_name": "Nvme1n1"
00:08:10.437 },
00:08:10.437 {
00:08:10.437 "nbd_device": "/dev/nbd10",
00:08:10.437 "bdev_name": "Nvme2n1"
00:08:10.437 },
00:08:10.437 {
00:08:10.437 "nbd_device": "/dev/nbd11",
00:08:10.437 "bdev_name": "Nvme2n2"
00:08:10.437 },
00:08:10.437 {
00:08:10.437 "nbd_device": "/dev/nbd12",
00:08:10.437 "bdev_name": "Nvme2n3"
}, 00:08:10.437 { 00:08:10.437 "nbd_device": "/dev/nbd13", 00:08:10.437 "bdev_name": "Nvme3n1" 00:08:10.437 } 00:08:10.437 ]' 00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:10.437 { 00:08:10.437 "nbd_device": "/dev/nbd0", 00:08:10.437 "bdev_name": "Nvme0n1" 00:08:10.437 }, 00:08:10.437 { 00:08:10.437 "nbd_device": "/dev/nbd1", 00:08:10.437 "bdev_name": "Nvme1n1" 00:08:10.437 }, 00:08:10.437 { 00:08:10.437 "nbd_device": "/dev/nbd10", 00:08:10.437 "bdev_name": "Nvme2n1" 00:08:10.437 }, 00:08:10.437 { 00:08:10.437 "nbd_device": "/dev/nbd11", 00:08:10.437 "bdev_name": "Nvme2n2" 00:08:10.437 }, 00:08:10.437 { 00:08:10.437 "nbd_device": "/dev/nbd12", 00:08:10.437 "bdev_name": "Nvme2n3" 00:08:10.437 }, 00:08:10.437 { 00:08:10.437 "nbd_device": "/dev/nbd13", 00:08:10.437 "bdev_name": "Nvme3n1" 00:08:10.437 } 00:08:10.437 ]' 00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:10.437 /dev/nbd1 00:08:10.437 /dev/nbd10 00:08:10.437 /dev/nbd11 00:08:10.437 /dev/nbd12 00:08:10.437 /dev/nbd13' 00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:10.437 /dev/nbd1 00:08:10.437 /dev/nbd10 00:08:10.437 /dev/nbd11 00:08:10.437 /dev/nbd12 00:08:10.437 /dev/nbd13' 00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:10.437 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:10.438 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:10.438 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:10.438 256+0 records in 00:08:10.438 256+0 records out 00:08:10.438 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00746525 s, 140 MB/s 00:08:10.438 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.438 04:57:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:10.696 256+0 records in 00:08:10.696 256+0 records out 00:08:10.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186225 s, 5.6 MB/s 00:08:10.696 04:57:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.696 04:57:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 
bs=4096 count=256 oflag=direct 00:08:10.696 256+0 records in 00:08:10.696 256+0 records out 00:08:10.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.193971 s, 5.4 MB/s 00:08:10.696 04:57:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.696 04:57:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:10.955 256+0 records in 00:08:10.955 256+0 records out 00:08:10.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.181522 s, 5.8 MB/s 00:08:10.955 04:57:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.955 04:57:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:11.214 256+0 records in 00:08:11.215 256+0 records out 00:08:11.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188143 s, 5.6 MB/s 00:08:11.215 04:57:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.215 04:57:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:11.215 256+0 records in 00:08:11.215 256+0 records out 00:08:11.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179744 s, 5.8 MB/s 00:08:11.473 04:57:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.473 04:57:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:11.473 256+0 records in 00:08:11.473 256+0 records out 00:08:11.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.181111 s, 5.8 MB/s 00:08:11.473 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:11.473 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.474 04:57:26 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.474 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.042 04:57:26 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:12.610 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:12.610 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:12.610 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:12.610 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.610 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.610 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:12.610 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.610 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.610 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.610 04:57:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:12.610 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:12.610 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:12.610 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:12.610 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.610 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.610 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:12.610 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.610 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.610 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.610 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:12.868 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:12.868 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:12.868 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:12.868 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.868 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.868 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:12.868 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.868 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.868 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.868 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:13.125 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:13.125 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:13.125 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:13.125 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.125 04:57:27 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.126 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:13.126 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.126 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.126 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:13.126 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.126 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:13.383 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:13.383 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:13.383 04:57:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:13.642 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:13.901 malloc_lvol_verify 00:08:13.901 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:14.160 5edf7a9b-f9c3-41dd-8d52-ea47d4731e43 00:08:14.160 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:14.418 1e8061c8-8397-4af3-af1a-cc696e25f12f 00:08:14.418 04:57:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:14.418 /dev/nbd0 00:08:14.418 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:14.418 mke2fs 1.46.5 (30-Dec-2021) 00:08:14.418 Discarding device blocks: 0/4096 done 00:08:14.418 Creating filesystem with 
4096 1k blocks and 1024 inodes 00:08:14.418 00:08:14.418 Allocating group tables: 0/1 done 00:08:14.418 Writing inode tables: 0/1 done 00:08:14.418 Creating journal (1024 blocks): done 00:08:14.418 Writing superblocks and filesystem accounting information: 0/1 done 00:08:14.418 00:08:14.418 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:14.418 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:14.418 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.418 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:14.418 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:14.418 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:14.418 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.418 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 66235 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 66235 ']' 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 66235 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:08:14.677 04:57:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:14.935 04:57:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66235 00:08:14.935 04:57:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:14.935 04:57:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:14.935 killing process with pid 66235 00:08:14.935 04:57:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66235' 00:08:14.935 04:57:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 66235 00:08:14.935 04:57:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 66235 00:08:15.881 04:57:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:15.881 00:08:15.881 real 0m12.226s 00:08:15.881 user 0m17.230s 00:08:15.881 sys 0m3.871s 00:08:15.881 04:57:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.881 04:57:30 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:15.881 ************************************ 00:08:15.881 END TEST bdev_nbd 00:08:15.881 ************************************ 00:08:15.881 04:57:30 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:15.881 04:57:30 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:08:15.881 skipping fio tests on NVMe due to multi-ns failures. 00:08:15.881 04:57:30 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:15.881 04:57:30 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:15.881 04:57:30 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:15.881 04:57:30 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:15.881 04:57:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.881 04:57:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:15.881 ************************************ 00:08:15.881 START TEST bdev_verify 00:08:15.881 ************************************ 00:08:15.881 04:57:30 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:15.881 [2024-07-24 04:57:30.488890] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:15.881 [2024-07-24 04:57:30.489049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66630 ] 00:08:16.153 [2024-07-24 04:57:30.651499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.411 [2024-07-24 04:57:30.823876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.411 [2024-07-24 04:57:30.823916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.977 Running I/O for 5 seconds... 
00:08:22.245
00:08:22.245 Latency(us)
00:08:22.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:22.245 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:22.245 Verification LBA range: start 0x0 length 0xbd0bd
00:08:22.245 Nvme0n1 : 5.05 1522.26 5.95 0.00 0.00 83881.94 16205.27 75306.82
00:08:22.245 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:22.245 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:22.245 Nvme0n1 : 5.08 1512.02 5.91 0.00 0.00 83713.68 8698.41 74353.57
00:08:22.245 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:22.245 Verification LBA range: start 0x0 length 0xa0000
00:08:22.245 Nvme1n1 : 5.05 1521.71 5.94 0.00 0.00 83806.33 18469.24 72447.07
00:08:22.245 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:22.245 Verification LBA range: start 0xa0000 length 0xa0000
00:08:22.245 Nvme1n1 : 5.04 1497.31 5.85 0.00 0.00 85164.53 16086.11 73876.95
00:08:22.245 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:22.245 Verification LBA range: start 0x0 length 0x80000
00:08:22.245 Nvme2n1 : 5.05 1521.15 5.94 0.00 0.00 83700.44 17992.61 71017.19
00:08:22.245 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:22.245 Verification LBA range: start 0x80000 length 0x80000
00:08:22.245 Nvme2n1 : 5.06 1503.72 5.87 0.00 0.00 84591.36 7000.44 69110.69
00:08:22.245 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:22.245 Verification LBA range: start 0x0 length 0x80000
00:08:22.245 Nvme2n2 : 5.05 1520.58 5.94 0.00 0.00 83593.96 18111.77 70063.94
00:08:22.245 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:22.245 Verification LBA range: start 0x80000 length 0x80000
00:08:22.245 Nvme2n2 : 5.07 1503.33 5.87 0.00 0.00 84434.79 5957.82 70063.94
00:08:22.245 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:22.245 Verification LBA range: start 0x0 length 0x80000
00:08:22.245 Nvme2n3 : 5.05 1520.03 5.94 0.00 0.00 83477.22 14596.65 71493.82
00:08:22.245 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:22.245 Verification LBA range: start 0x80000 length 0x80000
00:08:22.245 Nvme2n3 : 5.08 1512.71 5.91 0.00 0.00 83914.19 8757.99 70540.57
00:08:22.245 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:22.245 Verification LBA range: start 0x0 length 0x20000
00:08:22.245 Nvme3n1 : 5.06 1529.91 5.98 0.00 0.00 82850.32 4438.57 74830.20
00:08:22.245 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:22.245 Verification LBA range: start 0x20000 length 0x20000
00:08:22.245 Nvme3n1 : 5.08 1512.37 5.91 0.00 0.00 83797.36 8757.99 71493.82
00:08:22.245 ===================================================================================================================
00:08:22.245 Total : 18177.12 71.00 0.00 0.00 83907.19 4438.57 75306.82
00:08:23.620
00:08:23.620 real 0m7.471s
00:08:23.620 user 0m13.699s
00:08:23.620 sys 0m0.251s
04:57:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
04:57:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:23.620 ************************************
00:08:23.620 END TEST bdev_verify
00:08:23.620 ************************************
00:08:23.620 04:57:37 blockdev_nvme -- bdev/blockdev.sh@777 -- #
run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:23.621 04:57:37 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:08:23.621 04:57:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:23.621 04:57:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:23.621 ************************************
00:08:23.621 START TEST bdev_verify_big_io
00:08:23.621 ************************************
00:08:23.621 04:57:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:23.621 [2024-07-24 04:57:38.032655] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
00:08:23.621 [2024-07-24 04:57:38.032878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66728 ]
00:08:23.621 [2024-07-24 04:57:38.203131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:23.879 [2024-07-24 04:57:38.401505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:23.879 [2024-07-24 04:57:38.401533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:24.815 Running I/O for 5 seconds...
00:08:31.383
00:08:31.383 Latency(us)
00:08:31.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:31.383 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:31.383 Verification LBA range: start 0x0 length 0xbd0b
00:08:31.383 Nvme0n1 : 5.74 122.61 7.66 0.00 0.00 1005946.37 17158.52 1189657.13
00:08:31.383 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:31.383 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:31.383 Nvme0n1 : 5.71 123.32 7.71 0.00 0.00 1000555.99 22163.08 957063.91
00:08:31.383 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:31.383 Verification LBA range: start 0x0 length 0xa000
00:08:31.383 Nvme1n1 : 5.83 127.51 7.97 0.00 0.00 936140.12 47900.86 983754.94
00:08:31.383 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:31.383 Verification LBA range: start 0xa000 length 0xa000
00:08:31.383 Nvme1n1 : 5.77 129.42 8.09 0.00 0.00 938539.41 26333.56 854112.81
00:08:31.383 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:31.383 Verification LBA range: start 0x0 length 0x8000
00:08:31.383 Nvme2n1 : 5.84 127.48 7.97 0.00 0.00 899148.15 47662.55 823608.79
00:08:31.383 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:31.383 Verification LBA range: start 0x8000 length 0x8000
00:08:31.383 Nvme2n1 : 5.77 129.19 8.07 0.00 0.00 913378.57 26571.87 880803.84
00:08:31.383 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:31.383 Verification LBA range: start 0x0 length 0x8000
00:08:31.383 Nvme2n2 : 5.84 121.94 7.62 0.00 0.00 905712.54 39083.29 1631965.56
00:08:31.383 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:31.383 Verification LBA range: start 0x8000 length 0x8000
00:08:31.383 Nvme2n2 : 5.78 132.90 8.31 0.00 0.00 867102.25 32648.84 911307.87
00:08:31.383 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:31.383 Verification LBA range: start 0x0 length 0x8000
00:08:31.383 Nvme2n3 : 5.93 142.17 8.89 0.00 0.00 754012.92 16801.05 1662469.59
00:08:31.383 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:31.383 Verification LBA range: start 0x8000 length 0x8000
00:08:31.383 Nvme2n3 : 5.78 132.80 8.30 0.00 0.00 841373.17 33602.09 934185.89
00:08:31.383 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:31.383 Verification LBA range: start 0x0 length 0x2000
00:08:31.383 Nvme3n1 : 6.02 179.18 11.20 0.00 0.00 583773.39 718.66 1692973.61
00:08:31.383 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:31.383 Verification LBA range: start 0x2000 length 0x2000
00:08:31.383 Nvme3n1 : 5.83 148.94 9.31 0.00 0.00 730378.99 6315.29 949437.91
00:08:31.383 ===================================================================================================================
00:08:31.383 Total : 1617.46 101.09 0.00 0.00 850454.86 718.66 1692973.61
00:08:32.762
00:08:32.762 real 0m9.129s
00:08:32.762 user 0m16.833s
00:08:32.762 sys 0m0.304s
04:57:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
04:57:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:32.762 ************************************
00:08:32.762 END TEST bdev_verify_big_io
00:08:32.762 ************************************
00:08:32.762 04:57:47 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
04:57:47 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
04:57:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
04:57:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:32.762 ************************************
00:08:32.762 START TEST bdev_write_zeroes
00:08:32.762 ************************************
00:08:32.762 04:57:47 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:32.762 [2024-07-24 04:57:47.222591] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
00:08:32.762 [2024-07-24 04:57:47.222798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66848 ]
00:08:33.021 [2024-07-24 04:57:47.397547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:33.021 [2024-07-24 04:57:47.589017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:33.958 Running I/O for 1 seconds...
00:08:34.892
00:08:34.892 Latency(us)
00:08:34.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:34.892 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:34.892 Nvme0n1 : 1.02 8990.34 35.12 0.00 0.00 14185.79 6225.92 26691.03
00:08:34.892 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:34.892 Nvme1n1 : 1.02 8976.33 35.06 0.00 0.00 14184.54 11736.90 21090.68
00:08:34.892 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:34.892 Nvme2n1 : 1.02 8962.97 35.01 0.00 0.00 14157.29 11319.85 19065.02
00:08:34.892 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:34.892 Nvme2n2 : 1.02 8999.08 35.15 0.00 0.00 14075.79 8281.37 19184.17
00:08:34.892 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:34.892 Nvme2n3 : 1.03 8985.61 35.10 0.00 0.00 14070.45 8043.05 19065.02
00:08:34.892 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:34.892 Nvme3n1 : 1.03 8972.34 35.05 0.00 0.00 14059.07 7060.01 19422.49
00:08:34.892 ===================================================================================================================
00:08:34.892 Total : 53886.66 210.49 0.00 0.00 14121.97 6225.92 26691.03
00:08:35.837
00:08:35.837 real 0m3.327s
00:08:35.837 user 0m2.960s
00:08:35.837 sys 0m0.241s
04:57:50 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
04:57:50 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:08:35.837 ************************************
00:08:35.837 END TEST bdev_write_zeroes
00:08:35.837 ************************************
00:08:36.096 04:57:50 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
04:57:50 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
04:57:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
04:57:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:36.096 ************************************
00:08:36.096 START TEST bdev_json_nonenclosed
00:08:36.096 ************************************
00:08:36.096 04:57:50 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:36.354 [2024-07-24 04:57:50.608926] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
00:08:36.354 [2024-07-24 04:57:50.609116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66901 ]
00:08:36.354 [2024-07-24 04:57:50.778685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:36.354 [2024-07-24 04:57:50.932529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:36.354 [2024-07-24 04:57:50.932615] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:08:36.354 [2024-07-24 04:57:50.932684] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:36.354 [2024-07-24 04:57:50.932698] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.920 00:08:36.920 real 0m0.776s 00:08:36.920 user 0m0.543s 00:08:36.920 sys 0m0.127s 00:08:36.920 04:57:51 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.920 04:57:51 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:36.920 ************************************ 00:08:36.920 END TEST bdev_json_nonenclosed 00:08:36.920 ************************************ 00:08:36.920 04:57:51 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.920 04:57:51 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:36.920 04:57:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.920 04:57:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:36.920 ************************************ 00:08:36.920 START TEST bdev_json_nonarray 00:08:36.920 ************************************ 00:08:36.920 04:57:51 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.920 [2024-07-24 04:57:51.406522] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:36.920 [2024-07-24 04:57:51.406677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66931 ] 00:08:37.179 [2024-07-24 04:57:51.559605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.179 [2024-07-24 04:57:51.705674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.179 [2024-07-24 04:57:51.705810] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:37.179 [2024-07-24 04:57:51.705838] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:37.179 [2024-07-24 04:57:51.705865] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:37.438 00:08:37.438 real 0m0.731s 00:08:37.438 user 0m0.504s 00:08:37.438 sys 0m0.123s 00:08:37.438 04:57:52 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.438 04:57:52 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:37.438 ************************************ 00:08:37.438 END TEST bdev_json_nonarray 00:08:37.438 ************************************ 00:08:37.698 04:57:52 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:08:37.698 04:57:52 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:08:37.698 04:57:52 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:08:37.698 04:57:52 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:37.698 04:57:52 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:08:37.698 04:57:52 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:37.698 04:57:52 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:37.698 04:57:52 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:37.698 04:57:52 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:37.698 04:57:52 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:37.698 04:57:52 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:37.698 00:08:37.698 real 0m42.073s 00:08:37.698 user 1m3.520s 00:08:37.698 sys 0m6.272s 00:08:37.698 04:57:52 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.698 04:57:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.698 ************************************ 00:08:37.698 END TEST blockdev_nvme 00:08:37.698 ************************************ 00:08:37.698 04:57:52 -- spdk/autotest.sh@213 -- # uname -s 00:08:37.698 04:57:52 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:08:37.698 04:57:52 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:37.698 04:57:52 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:37.698 04:57:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.698 04:57:52 -- common/autotest_common.sh@10 -- # set +x 00:08:37.698 ************************************ 00:08:37.698 START TEST blockdev_nvme_gpt 00:08:37.698 ************************************ 00:08:37.698 04:57:52 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:37.698 * Looking for test storage... 
00:08:37.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67003 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 67003 00:08:37.698 04:57:52 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 67003 ']' 00:08:37.698 04:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:37.698 04:57:52 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.698 04:57:52 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.698 04:57:52 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:37.698 04:57:52 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.698 04:57:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.958 [2024-07-24 04:57:52.387132] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:37.958 [2024-07-24 04:57:52.387972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67003 ] 00:08:37.958 [2024-07-24 04:57:52.560571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.217 [2024-07-24 04:57:52.714944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.785 04:57:53 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.785 04:57:53 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:08:38.785 04:57:53 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:38.785 04:57:53 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:08:38.785 04:57:53 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:39.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:39.305 Waiting for block devices as requested 00:08:39.305 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.563 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.563 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.563 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:44.834 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # local nvme bdf 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # local 
device=nvme2n1 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # local device=nvme2n2 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # local device=nvme2n3 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # local device=nvme3c3n1 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # local device=nvme3n1 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:44.834 BYT; 00:08:44.834 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:44.834 BYT; 00:08:44.834 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:44.834 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:44.834 04:57:59 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:44.835 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:44.835 04:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:45.771 The operation has completed successfully. 
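For readers tracing the setup_gpt_conf steps above: the get_spdk_gpt/get_spdk_gpt_old helpers scrape SPDK's GPT partition-type GUIDs out of the gpt.h header and hand them to sgdisk. A minimal standalone sketch of that flow in the same shell idiom — the two parameter expansions are assumptions inferred from the consecutive spdk_guid= lines in the xtrace; the paths, parted arguments, and GUIDs are taken directly from the trace:

  # Grab the parenthesized GUID fields from the line defining SPDK_GPT_PART_TYPE_GUID.
  GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
  IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
  spdk_guid=${spdk_guid//, /-}   # assumed step: "0x6527994e, 0x2c5a, ..." -> "0x6527994e-0x2c5a-..."
  spdk_guid=${spdk_guid//0x/}    # assumed step: strip prefixes -> "6527994e-2c5a-4eec-9613-8f5944074e8b"
  # Label the blank disk, carve two halves, then stamp the SPDK type GUID plus a
  # fixed unique GUID on partition 1, mirroring blockdev.sh@127 and @131 above.
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
  sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1

The second sgdisk pass in the trace repeats this for partition 2 with the SPDK_GPT_PART_TYPE_GUID_OLD value, which is why two GPT partitions later surface as bdevs (Nvme1n1p1/Nvme1n1p2 in the dump further down).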
00:08:45.771 04:58:00 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:47.149 The operation has completed successfully. 00:08:47.149 04:58:01 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:47.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:47.978 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.978 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.978 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.978 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.978 04:58:02 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:47.978 04:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.978 04:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:47.978 [] 00:08:47.978 04:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.978 04:58:02 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:47.978 04:58:02 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:47.978 04:58:02 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:47.978 04:58:02 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:48.237 04:58:02 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:48.237 04:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.237 04:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.496 04:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.496 04:58:02 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:48.496 04:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.496 04:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.496 04:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.496 04:58:02 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:08:48.496 04:58:02 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:48.496 04:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.496 04:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.496 04:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.496 04:58:02 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:48.496 04:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.496 04:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.496 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.496 
04:58:03 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:48.496 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.496 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.496 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.496 04:58:03 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:48.496 04:58:03 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:48.496 04:58:03 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:48.496 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.496 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.496 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.756 04:58:03 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:48.756 04:58:03 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:48.756 04:58:03 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e2118170-6a12-4e70-bb07-c294e5be8486"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e2118170-6a12-4e70-bb07-c294e5be8486",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "64b07193-5304-446e-bdce-29a365635bd4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "64b07193-5304-446e-bdce-29a365635bd4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "71cef12e-cbcc-45ed-ba57-2c2db2aff355"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "71cef12e-cbcc-45ed-ba57-2c2db2aff355",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' 
"nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "87694735-2b50-4be7-a121-706ab9fdd832"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "87694735-2b50-4be7-a121-706ab9fdd832",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8c1f7189-d379-425c-996e-79c32c1a7c59"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8c1f7189-d379-425c-996e-79c32c1a7c59",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": 
"0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:48.756 04:58:03 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:48.756 04:58:03 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:48.756 04:58:03 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:48.756 04:58:03 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 67003 00:08:48.756 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 67003 ']' 00:08:48.756 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 67003 00:08:48.756 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname 00:08:48.756 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:48.756 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67003 00:08:48.756 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:48.756 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:48.756 killing process with pid 67003 00:08:48.756 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67003' 00:08:48.756 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 67003 00:08:48.756 04:58:03 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 67003 00:08:50.660 04:58:04 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:50.660 04:58:04 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:50.660 04:58:04 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:50.660 04:58:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.660 04:58:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:50.660 ************************************ 00:08:50.660 START TEST bdev_hello_world 00:08:50.660 ************************************ 00:08:50.660 04:58:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:50.660 [2024-07-24 04:58:04.998003] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:50.660 [2024-07-24 04:58:04.998168] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67631 ] 00:08:50.660 [2024-07-24 04:58:05.145012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.919 [2024-07-24 04:58:05.307214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.487 [2024-07-24 04:58:05.859709] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:51.487 [2024-07-24 04:58:05.859774] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:51.487 [2024-07-24 04:58:05.859813] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:51.487 [2024-07-24 04:58:05.862613] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:51.487 [2024-07-24 04:58:05.863250] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:51.487 [2024-07-24 04:58:05.863301] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:51.487 [2024-07-24 04:58:05.863515] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:51.487 00:08:51.487 [2024-07-24 04:58:05.863559] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:52.424 00:08:52.424 real 0m1.876s 00:08:52.424 user 0m1.589s 00:08:52.424 sys 0m0.179s 00:08:52.424 04:58:06 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.424 04:58:06 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:52.424 ************************************ 00:08:52.424 END TEST bdev_hello_world 00:08:52.425 ************************************ 00:08:52.425 04:58:06 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:52.425 04:58:06 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:52.425 04:58:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.425 04:58:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:52.425 ************************************ 00:08:52.425 START TEST bdev_bounds 00:08:52.425 ************************************ 00:08:52.425 04:58:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:08:52.425 04:58:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=67673 00:08:52.425 04:58:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:52.425 Process bdevio pid: 67673 00:08:52.425 04:58:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 67673' 00:08:52.425 04:58:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 67673 00:08:52.425 04:58:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:52.425 04:58:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 67673 ']' 00:08:52.425 04:58:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.425 04:58:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.425 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.425 04:58:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.425 04:58:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.425 04:58:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:52.425 [2024-07-24 04:58:06.971406] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:52.425 [2024-07-24 04:58:06.971643] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67673 ] 00:08:52.684 [2024-07-24 04:58:07.148953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:52.684 [2024-07-24 04:58:07.308349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.684 [2024-07-24 04:58:07.308421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.684 [2024-07-24 04:58:07.308396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.621 04:58:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.621 04:58:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:08:53.621 04:58:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:53.621 I/O targets: 00:08:53.621 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:53.621 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:53.621 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:53.621 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:53.621 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:53.621 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:53.621 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:53.621 00:08:53.621 00:08:53.621 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.621 http://cunit.sourceforge.net/ 00:08:53.621 00:08:53.621 00:08:53.621 Suite: bdevio tests on: Nvme3n1 00:08:53.621 Test: blockdev write read block ...passed 00:08:53.621 Test: blockdev write zeroes read block ...passed 00:08:53.621 Test: blockdev write zeroes read no split ...passed 00:08:53.621 Test: blockdev write zeroes read split ...passed 00:08:53.621 Test: blockdev write zeroes read split partial ...passed 00:08:53.621 Test: blockdev reset ...[2024-07-24 04:58:08.071182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:53.621 [2024-07-24 04:58:08.075348] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:53.621 passed 00:08:53.621 Test: blockdev write read 8 blocks ...passed 00:08:53.621 Test: blockdev write read size > 128k ...passed 00:08:53.621 Test: blockdev write read invalid size ...passed 00:08:53.621 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:53.621 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:53.621 Test: blockdev write read max offset ...passed 00:08:53.621 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:53.621 Test: blockdev writev readv 8 blocks ...passed 00:08:53.621 Test: blockdev writev readv 30 x 1block ...passed 00:08:53.621 Test: blockdev writev readv block ...passed 00:08:53.621 Test: blockdev writev readv size > 128k ...passed 00:08:53.621 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:53.621 Test: blockdev comparev and writev ...[2024-07-24 04:58:08.084237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x268606000 len:0x1000 00:08:53.621 [2024-07-24 04:58:08.084333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:53.621 passed 00:08:53.621 Test: blockdev nvme passthru rw ...passed 00:08:53.621 Test: blockdev nvme passthru vendor specific ...[2024-07-24 04:58:08.085204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:53.621 passed 00:08:53.621 Test: blockdev nvme admin passthru ...[2024-07-24 04:58:08.085264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:53.621 passed 00:08:53.621 Test: blockdev copy ...passed 00:08:53.621 Suite: bdevio tests on: Nvme2n3 00:08:53.621 Test: blockdev write read block ...passed 00:08:53.621 Test: blockdev write zeroes read block ...passed 00:08:53.621 Test: blockdev write zeroes read no split ...passed 00:08:53.621 Test: blockdev write zeroes read split ...passed 00:08:53.621 Test: blockdev write zeroes read split partial ...passed 00:08:53.621 Test: blockdev reset ...[2024-07-24 04:58:08.143777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:53.621 [2024-07-24 04:58:08.148371] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:53.621 passed 00:08:53.621 Test: blockdev write read 8 blocks ...passed 00:08:53.621 Test: blockdev write read size > 128k ...passed 00:08:53.621 Test: blockdev write read invalid size ...passed 00:08:53.621 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:53.621 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:53.621 Test: blockdev write read max offset ...passed 00:08:53.621 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:53.621 Test: blockdev writev readv 8 blocks ...passed 00:08:53.621 Test: blockdev writev readv 30 x 1block ...passed 00:08:53.621 Test: blockdev writev readv block ...passed 00:08:53.621 Test: blockdev writev readv size > 128k ...passed 00:08:53.621 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:53.621 Test: blockdev comparev and writev ...[2024-07-24 04:58:08.157373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28c23c000 len:0x1000 00:08:53.621 [2024-07-24 04:58:08.157465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:53.621 passed 00:08:53.621 Test: blockdev nvme passthru rw ...passed 00:08:53.621 Test: blockdev nvme passthru vendor specific ...[2024-07-24 04:58:08.158374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:53.621 [2024-07-24 04:58:08.158433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:53.621 passed 00:08:53.621 Test: blockdev nvme admin passthru ...passed 00:08:53.621 Test: blockdev copy ...passed 00:08:53.621 Suite: bdevio tests on: Nvme2n2 00:08:53.621 Test: blockdev write read block ...passed 00:08:53.621 Test: blockdev write zeroes read block ...passed 00:08:53.621 Test: blockdev write zeroes read no split ...passed 00:08:53.621 Test: blockdev write zeroes read split ...passed 00:08:53.621 Test: blockdev write zeroes read split partial ...passed 00:08:53.621 Test: blockdev reset ...[2024-07-24 04:58:08.221705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:53.622 [2024-07-24 04:58:08.226063] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:53.622 passed 00:08:53.622 Test: blockdev write read 8 blocks ...passed 00:08:53.622 Test: blockdev write read size > 128k ...passed 00:08:53.622 Test: blockdev write read invalid size ...passed 00:08:53.622 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:53.622 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:53.622 Test: blockdev write read max offset ...passed 00:08:53.622 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:53.622 Test: blockdev writev readv 8 blocks ...passed 00:08:53.622 Test: blockdev writev readv 30 x 1block ...passed 00:08:53.622 Test: blockdev writev readv block ...passed 00:08:53.622 Test: blockdev writev readv size > 128k ...passed 00:08:53.622 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:53.622 Test: blockdev comparev and writev ...[2024-07-24 04:58:08.234398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28c236000 len:0x1000 00:08:53.622 [2024-07-24 04:58:08.234500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:53.622 passed 00:08:53.622 Test: blockdev nvme passthru rw ...passed 00:08:53.622 Test: blockdev nvme passthru vendor specific ...[2024-07-24 04:58:08.235418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:53.622 [2024-07-24 04:58:08.235462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:53.622 passed 00:08:53.622 Test: blockdev nvme admin passthru ...passed 00:08:53.622 Test: blockdev copy ...passed 00:08:53.622 Suite: bdevio tests on: Nvme2n1 00:08:53.622 Test: blockdev write read block ...passed 00:08:53.622 Test: blockdev write zeroes read block ...passed 00:08:53.622 Test: blockdev write zeroes read no split ...passed 00:08:53.881 Test: blockdev write zeroes read split ...passed 00:08:53.881 Test: blockdev write zeroes read split partial ...passed 00:08:53.881 Test: blockdev reset ...[2024-07-24 04:58:08.297297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:53.881 [2024-07-24 04:58:08.301396] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:53.881 passed 00:08:53.881 Test: blockdev write read 8 blocks ...passed 00:08:53.881 Test: blockdev write read size > 128k ...passed 00:08:53.881 Test: blockdev write read invalid size ...passed 00:08:53.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:53.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:53.881 Test: blockdev write read max offset ...passed 00:08:53.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:53.881 Test: blockdev writev readv 8 blocks ...passed 00:08:53.881 Test: blockdev writev readv 30 x 1block ...passed 00:08:53.881 Test: blockdev writev readv block ...passed 00:08:53.881 Test: blockdev writev readv size > 128k ...passed 00:08:53.881 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:53.881 Test: blockdev comparev and writev ...[2024-07-24 04:58:08.309737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28c232000 len:0x1000 00:08:53.881 [2024-07-24 04:58:08.309826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:53.881 passed 00:08:53.881 Test: blockdev nvme passthru rw ...passed 00:08:53.881 Test: blockdev nvme passthru vendor specific ...[2024-07-24 04:58:08.310728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:53.881 [2024-07-24 04:58:08.310787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:53.881 passed 00:08:53.881 Test: blockdev nvme admin passthru ...passed 00:08:53.881 Test: blockdev copy ...passed 00:08:53.881 Suite: bdevio tests on: Nvme1n1p2 00:08:53.881 Test: blockdev write read block ...passed 00:08:53.881 Test: blockdev write zeroes read block ...passed 00:08:53.881 Test: blockdev write zeroes read no split ...passed 00:08:53.881 Test: blockdev write zeroes read split ...passed 00:08:53.881 Test: blockdev write zeroes read split partial ...passed 00:08:53.881 Test: blockdev reset ...[2024-07-24 04:58:08.376566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:53.881 [2024-07-24 04:58:08.380472] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:53.881 passed 00:08:53.881 Test: blockdev write read 8 blocks ...passed 00:08:53.881 Test: blockdev write read size > 128k ...passed 00:08:53.881 Test: blockdev write read invalid size ...passed 00:08:53.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:53.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:53.881 Test: blockdev write read max offset ...passed 00:08:53.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:53.881 Test: blockdev writev readv 8 blocks ...passed 00:08:53.881 Test: blockdev writev readv 30 x 1block ...passed 00:08:53.881 Test: blockdev writev readv block ...passed 00:08:53.881 Test: blockdev writev readv size > 128k ...passed 00:08:53.881 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:53.881 Test: blockdev comparev and writev ...[2024-07-24 04:58:08.389587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x28c22e000 len:0x1000 00:08:53.881 [2024-07-24 04:58:08.389702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:53.881 passed 00:08:53.881 Test: blockdev nvme passthru rw ...passed 00:08:53.881 Test: blockdev nvme passthru vendor specific ...passed 00:08:53.881 Test: blockdev nvme admin passthru ...passed 00:08:53.881 Test: blockdev copy ...passed 00:08:53.881 Suite: bdevio tests on: Nvme1n1p1 00:08:53.881 Test: blockdev write read block ...passed 00:08:53.881 Test: blockdev write zeroes read block ...passed 00:08:53.881 Test: blockdev write zeroes read no split ...passed 00:08:53.881 Test: blockdev write zeroes read split ...passed 00:08:53.881 Test: blockdev write zeroes read split partial ...passed 00:08:53.881 Test: blockdev reset ...[2024-07-24 04:58:08.445177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:53.881 [2024-07-24 04:58:08.448972] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:53.881 passed 00:08:53.881 Test: blockdev write read 8 blocks ...passed 00:08:53.881 Test: blockdev write read size > 128k ...passed 00:08:53.881 Test: blockdev write read invalid size ...passed 00:08:53.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:53.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:53.881 Test: blockdev write read max offset ...passed 00:08:53.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:53.881 Test: blockdev writev readv 8 blocks ...passed 00:08:53.881 Test: blockdev writev readv 30 x 1block ...passed 00:08:53.881 Test: blockdev writev readv block ...passed 00:08:53.881 Test: blockdev writev readv size > 128k ...passed 00:08:53.881 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:53.881 Test: blockdev comparev and writev ...[2024-07-24 04:58:08.457813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x27180e000 len:0x1000 00:08:53.881 [2024-07-24 04:58:08.457971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:53.881 passed 00:08:53.881 Test: blockdev nvme passthru rw ...passed 00:08:53.881 Test: blockdev nvme passthru vendor specific ...passed 00:08:53.881 Test: blockdev nvme admin passthru ...passed 00:08:53.881 Test: blockdev copy ...passed 00:08:53.881 Suite: bdevio tests on: Nvme0n1 00:08:53.881 Test: blockdev write read block ...passed 00:08:53.881 Test: blockdev write zeroes read block ...passed 00:08:53.881 Test: blockdev write zeroes read no split ...passed 00:08:53.881 Test: blockdev write zeroes read split ...passed 00:08:53.881 Test: blockdev write zeroes read split partial ...passed 00:08:53.881 Test: blockdev reset ...[2024-07-24 04:58:08.509173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:54.145 [2024-07-24 04:58:08.513012] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:54.145 passed 00:08:54.145 Test: blockdev write read 8 blocks ...passed 00:08:54.145 Test: blockdev write read size > 128k ...passed 00:08:54.145 Test: blockdev write read invalid size ...passed 00:08:54.145 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:54.145 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:54.145 Test: blockdev write read max offset ...passed 00:08:54.145 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:54.145 Test: blockdev writev readv 8 blocks ...passed 00:08:54.145 Test: blockdev writev readv 30 x 1block ...passed 00:08:54.145 Test: blockdev writev readv block ...passed 00:08:54.145 Test: blockdev writev readv size > 128k ...passed 00:08:54.145 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:54.145 Test: blockdev comparev and writev ...passed[2024-07-24 04:58:08.520928] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:54.145 separate metadata which is not supported yet. 
00:08:54.145 00:08:54.145 Test: blockdev nvme passthru rw ...passed 00:08:54.145 Test: blockdev nvme passthru vendor specific ...[2024-07-24 04:58:08.521577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:54.145 [2024-07-24 04:58:08.521658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:54.145 passed 00:08:54.145 Test: blockdev nvme admin passthru ...passed 00:08:54.145 Test: blockdev copy ...passed 00:08:54.145 00:08:54.145 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.145 suites 7 7 n/a 0 0 00:08:54.145 tests 161 161 161 0 0 00:08:54.145 asserts 1025 1025 1025 0 n/a 00:08:54.145 00:08:54.145 Elapsed time = 1.378 seconds 00:08:54.145 0 00:08:54.145 04:58:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 67673 00:08:54.145 04:58:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 67673 ']' 00:08:54.145 04:58:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 67673 00:08:54.145 04:58:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:08:54.145 04:58:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:54.145 04:58:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67673 00:08:54.145 04:58:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:54.145 04:58:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:54.145 killing process with pid 67673 00:08:54.145 04:58:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67673' 00:08:54.145 04:58:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 67673 00:08:54.145 04:58:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 67673 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:55.120 00:08:55.120 real 0m2.576s 00:08:55.120 user 0m6.301s 00:08:55.120 sys 0m0.354s 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:55.120 ************************************ 00:08:55.120 END TEST bdev_bounds 00:08:55.120 ************************************ 00:08:55.120 04:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:55.120 04:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:55.120 04:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.120 04:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:55.120 ************************************ 00:08:55.120 START TEST bdev_nbd 00:08:55.120 ************************************ 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:55.120 04:58:09 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=67733 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 67733 /var/tmp/spdk-nbd.sock 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 67733 ']' 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:55.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:55.120 04:58:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:55.120 [2024-07-24 04:58:09.570119] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:55.120 [2024-07-24 04:58:09.570277] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.120 [2024-07-24 04:58:09.722295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.380 [2024-07-24 04:58:09.871303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:55.948 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.207 1+0 records in 00:08:56.207 1+0 records out 00:08:56.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542166 s, 7.6 MB/s 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:56.207 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:56.467 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:56.467 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:56.467 04:58:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:56.467 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:56.467 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:56.467 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:56.467 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:56.467 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:56.467 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:56.467 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:56.467 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:56.467 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.467 1+0 records in 00:08:56.467 1+0 records out 00:08:56.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511438 s, 8.0 MB/s 00:08:56.467 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.467 04:58:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:56.467 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.467 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:56.467 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:56.467 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:56.467 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:56.467 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.726 1+0 records in 00:08:56.726 1+0 records out 00:08:56.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724722 s, 5.7 MB/s 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:56.726 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.985 1+0 records in 00:08:56.985 1+0 records out 00:08:56.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719474 s, 5.7 MB/s 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:56.985 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.244 1+0 records in 00:08:57.244 1+0 records out 00:08:57.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903123 s, 4.5 MB/s 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:57.244 04:58:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.503 1+0 records in 00:08:57.503 1+0 records out 00:08:57.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00317993 s, 1.3 MB/s 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:57.503 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:57.762 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:57.762 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:57.762 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:57.762 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:57.762 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:57.762 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:57.762 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:57.762 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:58.021 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:58.021 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:58.021 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:58.021 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.021 1+0 records in 00:08:58.021 1+0 records out 00:08:58.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000932023 s, 4.4 MB/s 00:08:58.021 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.021 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:58.021 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.021 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:58.021 04:58:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:58.021 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.021 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:58.021 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:58.281 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:58.281 { 00:08:58.281 "nbd_device": "/dev/nbd0", 00:08:58.281 "bdev_name": "Nvme0n1" 00:08:58.281 }, 00:08:58.281 { 00:08:58.281 "nbd_device": "/dev/nbd1", 00:08:58.281 "bdev_name": "Nvme1n1p1" 00:08:58.281 }, 00:08:58.281 { 00:08:58.281 "nbd_device": "/dev/nbd2", 00:08:58.281 "bdev_name": "Nvme1n1p2" 00:08:58.281 }, 00:08:58.281 { 00:08:58.281 "nbd_device": "/dev/nbd3", 00:08:58.281 "bdev_name": "Nvme2n1" 00:08:58.281 }, 00:08:58.281 { 00:08:58.281 "nbd_device": "/dev/nbd4", 00:08:58.281 "bdev_name": "Nvme2n2" 00:08:58.281 }, 00:08:58.281 { 00:08:58.281 "nbd_device": "/dev/nbd5", 00:08:58.281 "bdev_name": "Nvme2n3" 00:08:58.281 }, 00:08:58.281 { 00:08:58.281 "nbd_device": "/dev/nbd6", 00:08:58.281 "bdev_name": "Nvme3n1" 00:08:58.281 } 00:08:58.281 ]' 00:08:58.281 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:58.281 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:58.281 { 00:08:58.281 "nbd_device": "/dev/nbd0", 00:08:58.281 "bdev_name": "Nvme0n1" 00:08:58.281 }, 00:08:58.281 { 00:08:58.281 "nbd_device": "/dev/nbd1", 00:08:58.281 "bdev_name": "Nvme1n1p1" 00:08:58.281 }, 00:08:58.281 { 00:08:58.281 "nbd_device": "/dev/nbd2", 00:08:58.281 "bdev_name": "Nvme1n1p2" 00:08:58.281 }, 00:08:58.281 { 00:08:58.281 "nbd_device": "/dev/nbd3", 00:08:58.281 "bdev_name": "Nvme2n1" 00:08:58.281 }, 00:08:58.281 { 00:08:58.281 "nbd_device": "/dev/nbd4", 00:08:58.281 "bdev_name": "Nvme2n2" 00:08:58.281 }, 00:08:58.281 { 00:08:58.281 "nbd_device": "/dev/nbd5", 00:08:58.281 "bdev_name": "Nvme2n3" 00:08:58.281 }, 00:08:58.281 { 00:08:58.281 "nbd_device": "/dev/nbd6", 00:08:58.281 "bdev_name": "Nvme3n1" 00:08:58.281 } 00:08:58.281 ]' 00:08:58.281 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:58.281 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:58.281 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.281 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:58.281 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:58.281 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:58.281 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.281 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:58.540 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:58.540 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:58.540 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:58.540 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.540 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.540 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:58.540 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:58.540 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.540 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.540 04:58:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:58.799 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:58.799 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:58.799 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:58.799 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.799 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.799 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:58.799 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:58.799 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.799 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.799 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:59.057 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:59.057 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:59.057 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:59.057 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.057 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.057 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:59.057 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.057 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.057 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.057 04:58:13 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:59.315 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:59.315 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:59.315 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:59.315 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.315 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.315 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:59.315 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.315 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.315 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.315 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:59.574 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:59.574 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:59.574 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:59.574 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.574 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.574 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:59.574 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.574 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.574 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.574 04:58:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:59.574 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:59.574 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
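The nbd_stop_disk / waitfornbd_exit pattern above repeats identically for every device. Reconstructed from the xtrace line numbers in this log (bdev/nbd_common.sh@35-45), the helper amounts to the sketch below; the retry sleep and any timeout branch are not visible in the trace (every device here detaches on the first check) and are assumptions, not the verbatim SPDK source.

waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        # the device vanishes from /proc/partitions once the kernel detaches it
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1   # assumed retry interval; not shown in this trace
    done
    return 0
}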
00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.833 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:00.092 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:00.092 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:00.092 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:00.092 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:00.092 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:00.092 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:00.092 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:00.092 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:00.092 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:00.092 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:00.092 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:00.092 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:00.093 04:58:14 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:00.093 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:00.352 /dev/nbd0 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.352 1+0 records in 00:09:00.352 1+0 records out 00:09:00.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656231 s, 6.2 MB/s 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:00.352 04:58:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:00.611 /dev/nbd1 00:09:00.611 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:00.611 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:00.611 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:00.611 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:00.611 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:00.611 04:58:15 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:00.611 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.870 1+0 records in 00:09:00.870 1+0 records out 00:09:00.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676631 s, 6.1 MB/s 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:00.870 /dev/nbd10 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.870 1+0 records in 00:09:00.870 1+0 records out 00:09:00.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804737 s, 5.1 MB/s 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:00.870 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.128 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:01.128 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:01.386 /dev/nbd11 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.386 1+0 records in 00:09:01.386 1+0 records out 00:09:01.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580795 s, 7.1 MB/s 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:01.386 04:58:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:01.644 /dev/nbd12 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 
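The start-side probe, waitfornbd, appears throughout this trace as common/autotest_common.sh@866-887: wait for the device node to register, then prove it serves data with a single direct-I/O read. A minimal sketch assembled from those traced lines follows; the sleeps and the failure path are assumptions (every device in this run passed on the first attempt), and the scratch-file path is the one the log uses.

waitfornbd() {
    local nbd_name=$1
    local i
    local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    # first wait until the kernel lists the device
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed
    done
    # then require a direct-I/O read to return a non-empty block
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct; then
            local size
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != "0" ] && return 0
        fi
        sleep 0.1   # assumed
    done
    return 1   # assumed timeout behavior
}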
00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.644 1+0 records in 00:09:01.644 1+0 records out 00:09:01.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797665 s, 5.1 MB/s 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:01.644 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:01.903 /dev/nbd13 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.903 1+0 records in 00:09:01.903 1+0 records out 00:09:01.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766681 s, 5.3 MB/s 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:01.903 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:02.162 /dev/nbd14 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.162 1+0 records in 00:09:02.162 1+0 records out 00:09:02.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000909156 s, 4.5 MB/s 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.162 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:02.421 { 00:09:02.421 "nbd_device": "/dev/nbd0", 00:09:02.421 "bdev_name": "Nvme0n1" 00:09:02.421 }, 00:09:02.421 { 00:09:02.421 "nbd_device": "/dev/nbd1", 00:09:02.421 "bdev_name": "Nvme1n1p1" 00:09:02.421 }, 00:09:02.421 { 00:09:02.421 "nbd_device": "/dev/nbd10", 00:09:02.421 "bdev_name": "Nvme1n1p2" 00:09:02.421 }, 00:09:02.421 { 00:09:02.421 "nbd_device": "/dev/nbd11", 00:09:02.421 "bdev_name": "Nvme2n1" 00:09:02.421 }, 00:09:02.421 { 00:09:02.421 "nbd_device": "/dev/nbd12", 00:09:02.421 "bdev_name": "Nvme2n2" 00:09:02.421 }, 00:09:02.421 { 00:09:02.421 "nbd_device": "/dev/nbd13", 00:09:02.421 "bdev_name": "Nvme2n3" 
00:09:02.421 }, 00:09:02.421 { 00:09:02.421 "nbd_device": "/dev/nbd14", 00:09:02.421 "bdev_name": "Nvme3n1" 00:09:02.421 } 00:09:02.421 ]' 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:02.421 { 00:09:02.421 "nbd_device": "/dev/nbd0", 00:09:02.421 "bdev_name": "Nvme0n1" 00:09:02.421 }, 00:09:02.421 { 00:09:02.421 "nbd_device": "/dev/nbd1", 00:09:02.421 "bdev_name": "Nvme1n1p1" 00:09:02.421 }, 00:09:02.421 { 00:09:02.421 "nbd_device": "/dev/nbd10", 00:09:02.421 "bdev_name": "Nvme1n1p2" 00:09:02.421 }, 00:09:02.421 { 00:09:02.421 "nbd_device": "/dev/nbd11", 00:09:02.421 "bdev_name": "Nvme2n1" 00:09:02.421 }, 00:09:02.421 { 00:09:02.421 "nbd_device": "/dev/nbd12", 00:09:02.421 "bdev_name": "Nvme2n2" 00:09:02.421 }, 00:09:02.421 { 00:09:02.421 "nbd_device": "/dev/nbd13", 00:09:02.421 "bdev_name": "Nvme2n3" 00:09:02.421 }, 00:09:02.421 { 00:09:02.421 "nbd_device": "/dev/nbd14", 00:09:02.421 "bdev_name": "Nvme3n1" 00:09:02.421 } 00:09:02.421 ]' 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:02.421 /dev/nbd1 00:09:02.421 /dev/nbd10 00:09:02.421 /dev/nbd11 00:09:02.421 /dev/nbd12 00:09:02.421 /dev/nbd13 00:09:02.421 /dev/nbd14' 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:02.421 /dev/nbd1 00:09:02.421 /dev/nbd10 00:09:02.421 /dev/nbd11 00:09:02.421 /dev/nbd12 00:09:02.421 /dev/nbd13 00:09:02.421 /dev/nbd14' 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:02.421 256+0 records in 00:09:02.421 256+0 records out 00:09:02.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00728784 s, 144 MB/s 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:02.421 04:58:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:02.680 256+0 records in 00:09:02.680 256+0 records out 00:09:02.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.18921 s, 5.5 MB/s 00:09:02.680 04:58:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:02.680 04:58:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:02.940 256+0 records in 00:09:02.940 256+0 records out 00:09:02.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189323 s, 5.5 MB/s 00:09:02.940 04:58:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:02.940 04:58:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:02.940 256+0 records in 00:09:02.940 256+0 records out 00:09:02.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.201807 s, 5.2 MB/s 00:09:02.940 04:58:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:02.940 04:58:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:03.199 256+0 records in 00:09:03.199 256+0 records out 00:09:03.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180849 s, 5.8 MB/s 00:09:03.199 04:58:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.199 04:58:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:03.458 256+0 records in 00:09:03.458 256+0 records out 00:09:03.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158865 s, 6.6 MB/s 00:09:03.458 04:58:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.458 04:58:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:03.458 256+0 records in 00:09:03.458 256+0 records out 00:09:03.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17933 s, 5.8 MB/s 00:09:03.458 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.458 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:03.717 256+0 records in 00:09:03.717 256+0 records out 00:09:03.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190937 s, 5.5 MB/s 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.717 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.285 04:58:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:04.544 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:04.544 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:04.544 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:04.544 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.544 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.544 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:04.544 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:04.544 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.544 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.544 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:04.802 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:04.802 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:04.802 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:04.802 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.802 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.802 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:04.802 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:04.803 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.803 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.803 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:05.061 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:09:05.061 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:05.061 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:05.061 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.061 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.061 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:05.062 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:05.062 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.062 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.062 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:05.321 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:05.321 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:05.321 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:05.321 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.321 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.321 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:05.321 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:05.321 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.321 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.321 04:58:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:05.580 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:05.580 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:05.580 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:05.580 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.580 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.580 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:05.580 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:05.580 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.580 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:05.580 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.580 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:05.839 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:05.840 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:06.098 malloc_lvol_verify 00:09:06.098 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:06.357 bded8d76-4e7a-45a4-a444-48d010ae5b56 00:09:06.357 04:58:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:06.616 193019d8-2da7-4d7b-a594-dcdbe6b43249 00:09:06.617 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:06.617 /dev/nbd0 00:09:06.878 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:06.878 mke2fs 1.46.5 (30-Dec-2021) 00:09:06.878 Discarding device blocks: 0/4096 done 00:09:06.878 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:06.878 00:09:06.878 Allocating group tables: 0/1 done 00:09:06.878 Writing inode tables: 0/1 done 00:09:06.878 Creating journal (1024 blocks): done 00:09:06.878 Writing superblocks and filesystem accounting information: 0/1 done 00:09:06.878 00:09:06.878 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:06.878 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:06.878 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.878 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:06.878 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:06.878 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:06.878 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:09:06.878 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 67733 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 67733 ']' 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 67733 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67733 00:09:07.139 killing process with pid 67733 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67733' 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 67733 00:09:07.139 04:58:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 67733 00:09:08.075 04:58:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:08.075 00:09:08.075 real 0m13.139s 00:09:08.075 user 0m18.293s 00:09:08.075 sys 0m4.377s 00:09:08.075 04:58:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.075 04:58:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:08.075 ************************************ 00:09:08.075 END TEST bdev_nbd 00:09:08.075 ************************************ 00:09:08.075 04:58:22 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:08.075 04:58:22 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:09:08.075 04:58:22 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:09:08.075 skipping fio tests on NVMe due to multi-ns failures. 00:09:08.075 04:58:22 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:09:08.075 04:58:22 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:08.075 04:58:22 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:08.075 04:58:22 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:08.075 04:58:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.075 04:58:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:08.075 ************************************ 00:09:08.075 START TEST bdev_verify 00:09:08.075 ************************************ 00:09:08.075 04:58:22 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:08.334 [2024-07-24 04:58:22.757213] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:08.334 [2024-07-24 04:58:22.757348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68172 ] 00:09:08.334 [2024-07-24 04:58:22.909454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:08.592 [2024-07-24 04:58:23.068291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.592 [2024-07-24 04:58:23.068306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.158 Running I/O for 5 seconds... 
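While the verify workload runs, it helps to unpack the bdevperf invocation above (a sketch of the same command; the flag readings below are the common bdevperf meanings, and -C is carried over from the harness as-is rather than interpreted):

  # -q 128   : keep 128 I/Os outstanding per job
  # -o 4096  : 4 KiB I/O size
  # -w verify: write a pattern, read it back, and compare
  # -t 5     : run for 5 seconds
  # -m 0x3   : core mask 0b11, matching the two reactors started above
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3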
00:09:14.421 00:09:14.421 Latency(us) 00:09:14.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.422 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.422 Verification LBA range: start 0x0 length 0xbd0bd 00:09:14.422 Nvme0n1 : 5.06 1366.11 5.34 0.00 0.00 93418.67 22520.55 95325.09 00:09:14.422 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.422 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:14.422 Nvme0n1 : 5.10 1379.44 5.39 0.00 0.00 91459.25 9592.09 87699.08 00:09:14.422 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.422 Verification LBA range: start 0x0 length 0x4ff80 00:09:14.422 Nvme1n1p1 : 5.06 1365.50 5.33 0.00 0.00 93327.69 25022.84 93895.21 00:09:14.422 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.422 Verification LBA range: start 0x4ff80 length 0x4ff80 00:09:14.422 Nvme1n1p1 : 5.05 1367.63 5.34 0.00 0.00 93240.45 22639.71 87699.08 00:09:14.422 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.422 Verification LBA range: start 0x0 length 0x4ff7f 00:09:14.422 Nvme1n1p2 : 5.06 1364.83 5.33 0.00 0.00 93210.88 27405.96 90558.84 00:09:14.422 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.422 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:09:14.422 Nvme1n1p2 : 5.06 1367.05 5.34 0.00 0.00 93035.18 24546.21 84839.33 00:09:14.422 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.422 Verification LBA range: start 0x0 length 0x80000 00:09:14.422 Nvme2n1 : 5.07 1364.28 5.33 0.00 0.00 93053.36 28835.84 83886.08 00:09:14.422 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.422 Verification LBA range: start 0x80000 length 0x80000 00:09:14.422 Nvme2n1 : 5.08 1371.94 5.36 0.00 0.00 92493.91 9294.20 81026.33 00:09:14.422 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.422 Verification LBA range: start 0x0 length 0x80000 00:09:14.422 Nvme2n2 : 5.07 1363.70 5.33 0.00 0.00 92873.21 26929.34 87222.46 00:09:14.422 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.422 Verification LBA range: start 0x80000 length 0x80000 00:09:14.422 Nvme2n2 : 5.09 1371.42 5.36 0.00 0.00 92338.32 9234.62 78166.57 00:09:14.422 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.422 Verification LBA range: start 0x0 length 0x80000 00:09:14.422 Nvme2n3 : 5.09 1370.58 5.35 0.00 0.00 92221.53 9770.82 93418.59 00:09:14.422 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.422 Verification LBA range: start 0x80000 length 0x80000 00:09:14.422 Nvme2n3 : 5.10 1380.52 5.39 0.00 0.00 91753.24 9055.88 81026.33 00:09:14.422 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.422 Verification LBA range: start 0x0 length 0x20000 00:09:14.422 Nvme3n1 : 5.10 1379.75 5.39 0.00 0.00 91591.56 7745.16 94848.47 00:09:14.422 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.422 Verification LBA range: start 0x20000 length 0x20000 00:09:14.422 Nvme3n1 : 5.10 1380.01 5.39 0.00 0.00 91595.01 9413.35 84839.33 00:09:14.422 =================================================================================================================== 00:09:14.422 Total : 19192.76 74.97 0.00 0.00 92538.39 7745.16 95325.09 00:09:15.798 
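The Total row above is internally consistent: 19192.76 IOPS at 4096 bytes per I/O is 19192.76 x 4096 = 78.61 MB/s, and 78.61 MB/s / 1.048576 = 74.97 MiB/s, exactly the MiB/s figure reported.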
00:09:15.798 real 0m7.532s 00:09:15.798 user 0m13.845s 00:09:15.798 sys 0m0.239s 00:09:15.798 04:58:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.798 04:58:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:15.798 ************************************ 00:09:15.798 END TEST bdev_verify 00:09:15.798 ************************************ 00:09:15.798 04:58:30 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:15.798 04:58:30 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:15.798 04:58:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.798 04:58:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:15.798 ************************************ 00:09:15.798 START TEST bdev_verify_big_io 00:09:15.798 ************************************ 00:09:15.798 04:58:30 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:15.798 [2024-07-24 04:58:30.353041] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:15.798 [2024-07-24 04:58:30.353207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68269 ] 00:09:16.057 [2024-07-24 04:58:30.511079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:16.057 [2024-07-24 04:58:30.659732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.057 [2024-07-24 04:58:30.659750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.991 Running I/O for 5 seconds... 
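The big-I/O pass now starting is the same harness with a single change, trading I/O count for I/O size (taken from the invocation above; nothing else differs):

  # only -o changes relative to the previous run: 4096 -> 65536 (64 KiB I/Os)
  #   -q 128 -o 65536 -w verify -t 5 -C -m 0x3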
00:09:23.559 00:09:23.559 Latency(us) 00:09:23.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.559 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.559 Verification LBA range: start 0x0 length 0xbd0b 00:09:23.559 Nvme0n1 : 5.78 110.07 6.88 0.00 0.00 1100465.88 20971.52 1166779.11 00:09:23.559 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.559 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:23.559 Nvme0n1 : 5.69 118.13 7.38 0.00 0.00 1042611.89 20971.52 1105771.05 00:09:23.559 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.559 Verification LBA range: start 0x0 length 0x4ff8 00:09:23.559 Nvme1n1p1 : 5.78 113.11 7.07 0.00 0.00 1060908.50 81026.33 1128649.08 00:09:23.559 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.559 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:23.559 Nvme1n1p1 : 5.78 115.40 7.21 0.00 0.00 1020481.54 71017.19 960876.92 00:09:23.559 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.559 Verification LBA range: start 0x0 length 0x4ff7 00:09:23.559 Nvme1n1p2 : 5.79 107.00 6.69 0.00 0.00 1084384.51 112483.61 1769233.69 00:09:23.559 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.559 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:23.559 Nvme1n1p2 : 5.78 121.74 7.61 0.00 0.00 962710.64 87699.08 892242.85 00:09:23.559 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.559 Verification LBA range: start 0x0 length 0x8000 00:09:23.559 Nvme2n1 : 5.94 110.92 6.93 0.00 0.00 1013325.97 65774.31 1784485.70 00:09:23.559 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.559 Verification LBA range: start 0x8000 length 0x8000 00:09:23.559 Nvme2n1 : 5.88 126.16 7.89 0.00 0.00 908914.95 51475.55 1105771.05 00:09:23.559 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.559 Verification LBA range: start 0x0 length 0x8000 00:09:23.559 Nvme2n2 : 5.94 115.41 7.21 0.00 0.00 955710.46 66727.56 1822615.74 00:09:23.559 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.559 Verification LBA range: start 0x8000 length 0x8000 00:09:23.559 Nvme2n2 : 5.88 130.51 8.16 0.00 0.00 860325.86 44564.48 953250.91 00:09:23.559 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.559 Verification LBA range: start 0x0 length 0x8000 00:09:23.559 Nvme2n3 : 6.03 125.11 7.82 0.00 0.00 859577.26 12273.11 1860745.77 00:09:23.559 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.559 Verification LBA range: start 0x8000 length 0x8000 00:09:23.559 Nvme2n3 : 5.94 133.43 8.34 0.00 0.00 814559.06 54096.99 983754.94 00:09:23.559 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.559 Verification LBA range: start 0x0 length 0x2000 00:09:23.559 Nvme3n1 : 6.03 136.60 8.54 0.00 0.00 768085.18 1325.61 1631965.56 00:09:23.559 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.559 Verification LBA range: start 0x2000 length 0x2000 00:09:23.559 Nvme3n1 : 6.00 149.34 9.33 0.00 0.00 712632.55 3738.53 1006632.96 00:09:23.560 =================================================================================================================== 00:09:23.560 Total : 1712.91 107.06 0.00 0.00 927937.37 
1325.61 1860745.77 00:09:24.496 00:09:24.496 real 0m8.693s 00:09:24.496 user 0m16.167s 00:09:24.496 sys 0m0.263s 00:09:24.496 04:58:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.496 04:58:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:24.496 ************************************ 00:09:24.496 END TEST bdev_verify_big_io 00:09:24.496 ************************************ 00:09:24.496 04:58:39 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:24.496 04:58:39 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:24.496 04:58:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.496 04:58:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:24.496 ************************************ 00:09:24.496 START TEST bdev_write_zeroes 00:09:24.496 ************************************ 00:09:24.496 04:58:39 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:24.753 [2024-07-24 04:58:39.126751] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:24.753 [2024-07-24 04:58:39.126974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68376 ] 00:09:24.753 [2024-07-24 04:58:39.297986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.012 [2024-07-24 04:58:39.443240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.578 Running I/O for 1 seconds... 
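The 64 KiB Total row just above checks out the same way as the 4 KiB one: 1712.91 IOPS x 65536 bytes = 112.26 MB/s, or 107.06 MiB/s, matching the reported column. The write_zeroes pass now starting shortens the run to one second (-q 128 -o 4096 -w write_zeroes -t 1) and, per the EAL parameters above (-c 0x1), comes up on a single core instead of two.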
00:09:26.511 00:09:26.511 Latency(us) 00:09:26.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.511 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.511 Nvme0n1 : 1.02 7622.34 29.77 0.00 0.00 16735.56 7626.01 30146.56 00:09:26.511 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.511 Nvme1n1p1 : 1.03 7606.53 29.71 0.00 0.00 16734.68 12868.89 22997.18 00:09:26.511 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.511 Nvme1n1p2 : 1.03 7590.74 29.65 0.00 0.00 16711.92 11856.06 21686.46 00:09:26.511 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.511 Nvme2n1 : 1.03 7576.01 29.59 0.00 0.00 16680.30 10545.34 21567.30 00:09:26.511 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.511 Nvme2n2 : 1.03 7561.31 29.54 0.00 0.00 16684.49 10426.18 21090.68 00:09:26.511 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.511 Nvme2n3 : 1.03 7547.08 29.48 0.00 0.00 16662.66 8698.41 21209.83 00:09:26.511 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.511 Nvme3n1 : 1.04 7533.02 29.43 0.00 0.00 16660.67 8400.52 21567.30 00:09:26.511 =================================================================================================================== 00:09:26.511 Total : 53037.02 207.18 0.00 0.00 16695.75 7626.01 30146.56 00:09:27.887 00:09:27.887 real 0m3.149s 00:09:27.887 user 0m2.793s 00:09:27.887 sys 0m0.229s 00:09:27.887 04:58:42 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.887 04:58:42 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:27.887 ************************************ 00:09:27.887 END TEST bdev_write_zeroes 00:09:27.887 ************************************ 00:09:27.887 04:58:42 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:27.887 04:58:42 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:27.887 04:58:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.887 04:58:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:27.887 ************************************ 00:09:27.887 START TEST bdev_json_nonenclosed 00:09:27.887 ************************************ 00:09:27.887 04:58:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:27.887 [2024-07-24 04:58:42.331341] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:27.887 [2024-07-24 04:58:42.331544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68434 ] 00:09:27.887 [2024-07-24 04:58:42.502730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.146 [2024-07-24 04:58:42.656829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.146 [2024-07-24 04:58:42.656998] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:28.146 [2024-07-24 04:58:42.657044] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:28.146 [2024-07-24 04:58:42.657081] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:28.405 00:09:28.405 real 0m0.769s 00:09:28.405 user 0m0.525s 00:09:28.405 sys 0m0.138s 00:09:28.405 04:58:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.405 04:58:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:28.405 ************************************ 00:09:28.405 END TEST bdev_json_nonenclosed 00:09:28.405 ************************************ 00:09:28.664 04:58:43 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:28.664 04:58:43 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:28.664 04:58:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.664 04:58:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:28.664 ************************************ 00:09:28.664 START TEST bdev_json_nonarray 00:09:28.664 ************************************ 00:09:28.664 04:58:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:28.664 [2024-07-24 04:58:43.143857] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:28.664 [2024-07-24 04:58:43.144069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68459 ] 00:09:28.921 [2024-07-24 04:58:43.295643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.921 [2024-07-24 04:58:43.469217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.921 [2024-07-24 04:58:43.469376] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:28.921 [2024-07-24 04:58:43.469404] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:28.921 [2024-07-24 04:58:43.469435] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:29.499 00:09:29.499 real 0m0.779s 00:09:29.499 user 0m0.561s 00:09:29.499 sys 0m0.112s 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:29.499 ************************************ 00:09:29.499 END TEST bdev_json_nonarray 00:09:29.499 ************************************ 00:09:29.499 04:58:43 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:09:29.499 04:58:43 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:09:29.499 04:58:43 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:29.499 04:58:43 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:29.499 04:58:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.499 04:58:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:29.499 ************************************ 00:09:29.499 START TEST bdev_gpt_uuid 00:09:29.499 ************************************ 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=68490 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 68490 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 68490 ']' 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.499 04:58:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:29.499 [2024-07-24 04:58:44.013336] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:29.499 [2024-07-24 04:58:44.013528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68490 ] 00:09:29.777 [2024-07-24 04:58:44.185257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.777 [2024-07-24 04:58:44.348707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.715 04:58:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.715 04:58:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:09:30.715 04:58:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:30.715 04:58:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.715 04:58:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:30.715 Some configs were skipped because the RPC state that can call them passed over. 00:09:30.715 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.715 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:09:30.715 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.715 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:30.715 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.715 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:30.715 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.715 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:30.974 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.974 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:09:30.974 { 00:09:30.974 "name": "Nvme1n1p1", 00:09:30.974 "aliases": [ 00:09:30.974 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:30.974 ], 00:09:30.974 "product_name": "GPT Disk", 00:09:30.974 "block_size": 4096, 00:09:30.974 "num_blocks": 655104, 00:09:30.974 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:30.974 "assigned_rate_limits": { 00:09:30.974 "rw_ios_per_sec": 0, 00:09:30.974 "rw_mbytes_per_sec": 0, 00:09:30.974 "r_mbytes_per_sec": 0, 00:09:30.974 "w_mbytes_per_sec": 0 00:09:30.974 }, 00:09:30.974 "claimed": false, 00:09:30.974 "zoned": false, 00:09:30.974 "supported_io_types": { 00:09:30.974 "read": true, 00:09:30.974 "write": true, 00:09:30.974 "unmap": true, 00:09:30.974 "flush": true, 00:09:30.974 "reset": true, 00:09:30.974 "nvme_admin": false, 00:09:30.974 "nvme_io": false, 00:09:30.974 "nvme_io_md": false, 00:09:30.974 "write_zeroes": true, 00:09:30.974 "zcopy": false, 00:09:30.974 "get_zone_info": false, 00:09:30.974 "zone_management": false, 00:09:30.974 "zone_append": false, 00:09:30.975 "compare": true, 00:09:30.975 "compare_and_write": false, 00:09:30.975 "abort": true, 00:09:30.975 "seek_hole": false, 00:09:30.975 "seek_data": false, 00:09:30.975 "copy": true, 00:09:30.975 "nvme_iov_md": false 00:09:30.975 }, 00:09:30.975 "driver_specific": { 
00:09:30.975 "gpt": { 00:09:30.975 "base_bdev": "Nvme1n1", 00:09:30.975 "offset_blocks": 256, 00:09:30.975 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:30.975 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:30.975 "partition_name": "SPDK_TEST_first" 00:09:30.975 } 00:09:30.975 } 00:09:30.975 } 00:09:30.975 ]' 00:09:30.975 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:09:30.975 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:09:30.975 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:09:30.975 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:30.975 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:30.975 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:30.975 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:30.975 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.975 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:30.975 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.975 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:09:30.975 { 00:09:30.975 "name": "Nvme1n1p2", 00:09:30.975 "aliases": [ 00:09:30.975 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:30.975 ], 00:09:30.975 "product_name": "GPT Disk", 00:09:30.975 "block_size": 4096, 00:09:30.975 "num_blocks": 655103, 00:09:30.975 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:30.975 "assigned_rate_limits": { 00:09:30.975 "rw_ios_per_sec": 0, 00:09:30.975 "rw_mbytes_per_sec": 0, 00:09:30.975 "r_mbytes_per_sec": 0, 00:09:30.975 "w_mbytes_per_sec": 0 00:09:30.975 }, 00:09:30.975 "claimed": false, 00:09:30.975 "zoned": false, 00:09:30.975 "supported_io_types": { 00:09:30.975 "read": true, 00:09:30.975 "write": true, 00:09:30.975 "unmap": true, 00:09:30.975 "flush": true, 00:09:30.975 "reset": true, 00:09:30.975 "nvme_admin": false, 00:09:30.975 "nvme_io": false, 00:09:30.975 "nvme_io_md": false, 00:09:30.975 "write_zeroes": true, 00:09:30.975 "zcopy": false, 00:09:30.975 "get_zone_info": false, 00:09:30.975 "zone_management": false, 00:09:30.975 "zone_append": false, 00:09:30.975 "compare": true, 00:09:30.975 "compare_and_write": false, 00:09:30.975 "abort": true, 00:09:30.975 "seek_hole": false, 00:09:30.975 "seek_data": false, 00:09:30.975 "copy": true, 00:09:30.975 "nvme_iov_md": false 00:09:30.975 }, 00:09:30.975 "driver_specific": { 00:09:30.975 "gpt": { 00:09:30.975 "base_bdev": "Nvme1n1", 00:09:30.975 "offset_blocks": 655360, 00:09:30.975 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:30.975 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:30.975 "partition_name": "SPDK_TEST_second" 00:09:30.975 } 00:09:30.975 } 00:09:30.975 } 00:09:30.975 ]' 00:09:30.975 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:09:30.975 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:09:30.975 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:09:31.234 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:31.234 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:31.234 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:31.234 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 68490 00:09:31.234 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 68490 ']' 00:09:31.234 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 68490 00:09:31.234 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:09:31.234 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:31.234 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68490 00:09:31.234 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:31.234 killing process with pid 68490 00:09:31.234 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:31.234 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68490' 00:09:31.234 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 68490 00:09:31.234 04:58:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 68490 00:09:33.140 00:09:33.140 real 0m3.608s 00:09:33.140 user 0m3.955s 00:09:33.140 sys 0m0.435s 00:09:33.140 04:58:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.140 04:58:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:33.140 ************************************ 00:09:33.140 END TEST bdev_gpt_uuid 00:09:33.140 ************************************ 00:09:33.140 04:58:47 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:09:33.141 04:58:47 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:33.141 04:58:47 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:09:33.141 04:58:47 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:33.141 04:58:47 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:33.141 04:58:47 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:33.141 04:58:47 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:33.141 04:58:47 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:33.141 04:58:47 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:33.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:33.659 Waiting for block devices as requested 00:09:33.659 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.659 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:33.659 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.918 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.190 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:39.190 04:58:53 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:39.190 04:58:53 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:39.190 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:39.190 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:39.190 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:39.190 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:39.190 04:58:53 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:39.190 ************************************ 00:09:39.190 END TEST blockdev_nvme_gpt 00:09:39.190 ************************************ 00:09:39.190 00:09:39.190 real 1m1.537s 00:09:39.190 user 1m18.189s 00:09:39.190 sys 0m9.300s 00:09:39.190 04:58:53 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:39.190 04:58:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:39.190 04:58:53 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:39.190 04:58:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:39.190 04:58:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.190 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:09:39.190 ************************************ 00:09:39.190 START TEST nvme 00:09:39.190 ************************************ 00:09:39.190 04:58:53 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:39.447 * Looking for test storage... 00:09:39.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:39.447 04:58:53 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:40.014 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:40.582 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.582 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.582 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.582 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.582 04:58:55 nvme -- nvme/nvme.sh@79 -- # uname 00:09:40.582 04:58:55 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:40.582 04:58:55 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:40.582 04:58:55 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:40.582 04:58:55 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:40.582 04:58:55 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:09:40.582 04:58:55 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:09:40.582 04:58:55 nvme -- common/autotest_common.sh@1069 -- # stubpid=69124 00:09:40.582 04:58:55 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:40.582 Waiting for stub to ready for secondary processes... 00:09:40.582 04:58:55 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 
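The wait that follows is a simple readiness poll: the stub touches /var/run/spdk_stub0 once its primary process is up, and the harness checks each second that pid 69124 is still alive, as the traced tests below show. Reconstructed from those checks (the loop shape and the failure branch are assumptions; the file name, pid, and sleep interval come from the trace):

  # poll until the stub publishes its ready file, bailing out if the process dies
  while [ ! -e /var/run/spdk_stub0 ]; do
      [[ -e /proc/69124 ]] || exit 1   # 69124 = stubpid above; stub exited before becoming ready
      sleep 1s
  done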
00:09:40.582 04:58:55 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:40.582 04:58:55 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69124 ]] 00:09:40.582 04:58:55 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:09:40.582 [2024-07-24 04:58:55.149691] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:40.582 [2024-07-24 04:58:55.149915] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:41.522 [2024-07-24 04:58:55.968429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:41.522 04:58:56 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:41.522 04:58:56 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69124 ]] 00:09:41.522 04:58:56 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:09:41.781 [2024-07-24 04:58:56.185811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.781 [2024-07-24 04:58:56.185898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.781 [2024-07-24 04:58:56.185908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.782 [2024-07-24 04:58:56.203645] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:41.782 [2024-07-24 04:58:56.203729] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:41.782 [2024-07-24 04:58:56.216147] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:41.782 [2024-07-24 04:58:56.216270] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:41.782 [2024-07-24 04:58:56.218530] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:41.782 [2024-07-24 04:58:56.218734] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:41.782 [2024-07-24 04:58:56.218817] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:41.782 [2024-07-24 04:58:56.221938] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:41.782 [2024-07-24 04:58:56.222187] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:41.782 [2024-07-24 04:58:56.222287] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:41.782 [2024-07-24 04:58:56.225516] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:41.782 [2024-07-24 04:58:56.225759] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:41.782 [2024-07-24 04:58:56.225897] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:41.782 [2024-07-24 04:58:56.225980] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:41.782 [2024-07-24 04:58:56.226054] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:42.719 04:58:57 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:42.719 done. 00:09:42.719 04:58:57 nvme -- common/autotest_common.sh@1076 -- # echo done. 
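Each "fuse session ... created" line above corresponds to one NVMe CUSE character device: one per controller (spdk/nvme0 through spdk/nvme3) and one per namespace, including the three namespaces on the fourth controller. Once the stub reports done they should be visible as device nodes (assuming the default /dev/spdk prefix is in use; the node names simply mirror the sessions above):

  ls /dev/spdk
  # nvme0  nvme0n1  nvme1  nvme1n1  nvme2  nvme2n1  nvme3  nvme3n1  nvme3n2  nvme3n3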
00:09:42.719 04:58:57 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:42.719 04:58:57 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:09:42.719 04:58:57 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.719 04:58:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:42.719 ************************************ 00:09:42.719 START TEST nvme_reset 00:09:42.719 ************************************ 00:09:42.719 04:58:57 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:42.978 Initializing NVMe Controllers 00:09:42.978 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:42.978 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:42.978 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:42.978 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:42.978 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:42.978 00:09:42.978 real 0m0.252s 00:09:42.978 user 0m0.097s 00:09:42.978 sys 0m0.113s 00:09:42.978 04:58:57 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:42.978 04:58:57 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:42.978 ************************************ 00:09:42.978 END TEST nvme_reset 00:09:42.978 ************************************ 00:09:42.978 04:58:57 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:42.978 04:58:57 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:42.978 04:58:57 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.978 04:58:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:42.978 ************************************ 00:09:42.978 START TEST nvme_identify 00:09:42.978 ************************************ 00:09:42.978 04:58:57 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:09:42.978 04:58:57 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:42.978 04:58:57 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:42.978 04:58:57 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:42.978 04:58:57 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:42.978 04:58:57 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # bdfs=() 00:09:42.978 04:58:57 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # local bdfs 00:09:42.978 04:58:57 nvme.nvme_identify -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:42.978 04:58:57 nvme.nvme_identify -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:42.978 04:58:57 nvme.nvme_identify -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:09:42.978 04:58:57 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # (( 4 == 0 )) 00:09:42.978 04:58:57 nvme.nvme_identify -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:42.978 04:58:57 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:43.241 [2024-07-24 04:58:57.722898] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 69157 terminated unexpected 00:09:43.241 ===================================================== 00:09:43.241 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:43.241 
===================================================== 00:09:43.241 Controller Capabilities/Features 00:09:43.241 ================================ 00:09:43.241 Vendor ID: 1b36 00:09:43.241 Subsystem Vendor ID: 1af4 00:09:43.241 Serial Number: 12340 00:09:43.241 Model Number: QEMU NVMe Ctrl 00:09:43.241 Firmware Version: 8.0.0 00:09:43.241 Recommended Arb Burst: 6 00:09:43.241 IEEE OUI Identifier: 00 54 52 00:09:43.241 Multi-path I/O 00:09:43.241 May have multiple subsystem ports: No 00:09:43.241 May have multiple controllers: No 00:09:43.241 Associated with SR-IOV VF: No 00:09:43.241 Max Data Transfer Size: 524288 00:09:43.241 Max Number of Namespaces: 256 00:09:43.241 Max Number of I/O Queues: 64 00:09:43.241 NVMe Specification Version (VS): 1.4 00:09:43.241 NVMe Specification Version (Identify): 1.4 00:09:43.241 Maximum Queue Entries: 2048 00:09:43.241 Contiguous Queues Required: Yes 00:09:43.241 Arbitration Mechanisms Supported 00:09:43.241 Weighted Round Robin: Not Supported 00:09:43.241 Vendor Specific: Not Supported 00:09:43.241 Reset Timeout: 7500 ms 00:09:43.241 Doorbell Stride: 4 bytes 00:09:43.241 NVM Subsystem Reset: Not Supported 00:09:43.241 Command Sets Supported 00:09:43.241 NVM Command Set: Supported 00:09:43.241 Boot Partition: Not Supported 00:09:43.241 Memory Page Size Minimum: 4096 bytes 00:09:43.241 Memory Page Size Maximum: 65536 bytes 00:09:43.241 Persistent Memory Region: Not Supported 00:09:43.241 Optional Asynchronous Events Supported 00:09:43.241 Namespace Attribute Notices: Supported 00:09:43.241 Firmware Activation Notices: Not Supported 00:09:43.241 ANA Change Notices: Not Supported 00:09:43.241 PLE Aggregate Log Change Notices: Not Supported 00:09:43.241 LBA Status Info Alert Notices: Not Supported 00:09:43.241 EGE Aggregate Log Change Notices: Not Supported 00:09:43.241 Normal NVM Subsystem Shutdown event: Not Supported 00:09:43.241 Zone Descriptor Change Notices: Not Supported 00:09:43.241 Discovery Log Change Notices: Not Supported 00:09:43.241 Controller Attributes 00:09:43.241 128-bit Host Identifier: Not Supported 00:09:43.241 Non-Operational Permissive Mode: Not Supported 00:09:43.241 NVM Sets: Not Supported 00:09:43.241 Read Recovery Levels: Not Supported 00:09:43.241 Endurance Groups: Not Supported 00:09:43.241 Predictable Latency Mode: Not Supported 00:09:43.241 Traffic Based Keep ALive: Not Supported 00:09:43.241 Namespace Granularity: Not Supported 00:09:43.241 SQ Associations: Not Supported 00:09:43.241 UUID List: Not Supported 00:09:43.241 Multi-Domain Subsystem: Not Supported 00:09:43.241 Fixed Capacity Management: Not Supported 00:09:43.241 Variable Capacity Management: Not Supported 00:09:43.241 Delete Endurance Group: Not Supported 00:09:43.241 Delete NVM Set: Not Supported 00:09:43.241 Extended LBA Formats Supported: Supported 00:09:43.241 Flexible Data Placement Supported: Not Supported 00:09:43.241 00:09:43.241 Controller Memory Buffer Support 00:09:43.241 ================================ 00:09:43.241 Supported: No 00:09:43.241 00:09:43.241 Persistent Memory Region Support 00:09:43.241 ================================ 00:09:43.241 Supported: No 00:09:43.241 00:09:43.241 Admin Command Set Attributes 00:09:43.241 ============================ 00:09:43.241 Security Send/Receive: Not Supported 00:09:43.241 Format NVM: Supported 00:09:43.241 Firmware Activate/Download: Not Supported 00:09:43.241 Namespace Management: Supported 00:09:43.241 Device Self-Test: Not Supported 00:09:43.241 Directives: Supported 00:09:43.241 NVMe-MI: Not Supported 
00:09:43.241 Virtualization Management: Not Supported 00:09:43.241 Doorbell Buffer Config: Supported 00:09:43.241 Get LBA Status Capability: Not Supported 00:09:43.241 Command & Feature Lockdown Capability: Not Supported 00:09:43.241 Abort Command Limit: 4 00:09:43.241 Async Event Request Limit: 4 00:09:43.241 Number of Firmware Slots: N/A 00:09:43.241 Firmware Slot 1 Read-Only: N/A 00:09:43.241 Firmware Activation Without Reset: N/A 00:09:43.241 Multiple Update Detection Support: N/A 00:09:43.241 Firmware Update Granularity: No Information Provided 00:09:43.241 Per-Namespace SMART Log: Yes 00:09:43.241 Asymmetric Namespace Access Log Page: Not Supported 00:09:43.241 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:43.241 Command Effects Log Page: Supported 00:09:43.241 Get Log Page Extended Data: Supported 00:09:43.241 Telemetry Log Pages: Not Supported 00:09:43.241 Persistent Event Log Pages: Not Supported 00:09:43.241 Supported Log Pages Log Page: May Support 00:09:43.241 Commands Supported & Effects Log Page: Not Supported 00:09:43.241 Feature Identifiers & Effects Log Page:May Support 00:09:43.241 NVMe-MI Commands & Effects Log Page: May Support 00:09:43.241 Data Area 4 for Telemetry Log: Not Supported 00:09:43.241 Error Log Page Entries Supported: 1 00:09:43.241 Keep Alive: Not Supported 00:09:43.241 00:09:43.241 NVM Command Set Attributes 00:09:43.241 ========================== 00:09:43.241 Submission Queue Entry Size 00:09:43.241 Max: 64 00:09:43.241 Min: 64 00:09:43.241 Completion Queue Entry Size 00:09:43.241 Max: 16 00:09:43.241 Min: 16 00:09:43.241 Number of Namespaces: 256 00:09:43.241 Compare Command: Supported 00:09:43.241 Write Uncorrectable Command: Not Supported 00:09:43.241 Dataset Management Command: Supported 00:09:43.241 Write Zeroes Command: Supported 00:09:43.241 Set Features Save Field: Supported 00:09:43.241 Reservations: Not Supported 00:09:43.241 Timestamp: Supported 00:09:43.241 Copy: Supported 00:09:43.241 Volatile Write Cache: Present 00:09:43.241 Atomic Write Unit (Normal): 1 00:09:43.241 Atomic Write Unit (PFail): 1 00:09:43.241 Atomic Compare & Write Unit: 1 00:09:43.241 Fused Compare & Write: Not Supported 00:09:43.241 Scatter-Gather List 00:09:43.241 SGL Command Set: Supported 00:09:43.241 SGL Keyed: Not Supported 00:09:43.241 SGL Bit Bucket Descriptor: Not Supported 00:09:43.241 SGL Metadata Pointer: Not Supported 00:09:43.241 Oversized SGL: Not Supported 00:09:43.241 SGL Metadata Address: Not Supported 00:09:43.241 SGL Offset: Not Supported 00:09:43.241 Transport SGL Data Block: Not Supported 00:09:43.241 Replay Protected Memory Block: Not Supported 00:09:43.241 00:09:43.241 Firmware Slot Information 00:09:43.241 ========================= 00:09:43.241 Active slot: 1 00:09:43.241 Slot 1 Firmware Revision: 1.0 00:09:43.241 00:09:43.241 00:09:43.241 Commands Supported and Effects 00:09:43.241 ============================== 00:09:43.241 Admin Commands 00:09:43.241 -------------- 00:09:43.241 Delete I/O Submission Queue (00h): Supported 00:09:43.241 Create I/O Submission Queue (01h): Supported 00:09:43.241 Get Log Page (02h): Supported 00:09:43.241 Delete I/O Completion Queue (04h): Supported 00:09:43.241 Create I/O Completion Queue (05h): Supported 00:09:43.241 Identify (06h): Supported 00:09:43.241 Abort (08h): Supported 00:09:43.241 Set Features (09h): Supported 00:09:43.241 Get Features (0Ah): Supported 00:09:43.241 Asynchronous Event Request (0Ch): Supported 00:09:43.241 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:43.241 Directive 
Send (19h): Supported 00:09:43.241 Directive Receive (1Ah): Supported 00:09:43.241 Virtualization Management (1Ch): Supported 00:09:43.241 Doorbell Buffer Config (7Ch): Supported 00:09:43.241 Format NVM (80h): Supported LBA-Change 00:09:43.241 I/O Commands 00:09:43.241 ------------ 00:09:43.241 Flush (00h): Supported LBA-Change 00:09:43.241 Write (01h): Supported LBA-Change 00:09:43.241 Read (02h): Supported 00:09:43.241 Compare (05h): Supported 00:09:43.241 Write Zeroes (08h): Supported LBA-Change 00:09:43.241 Dataset Management (09h): Supported LBA-Change 00:09:43.241 Unknown (0Ch): Supported 00:09:43.241 Unknown (12h): Supported 00:09:43.241 Copy (19h): Supported LBA-Change 00:09:43.241 Unknown (1Dh): Supported LBA-Change 00:09:43.241 00:09:43.241 Error Log 00:09:43.241 ========= 00:09:43.241 00:09:43.241 Arbitration 00:09:43.241 =========== 00:09:43.241 Arbitration Burst: no limit 00:09:43.241 00:09:43.241 Power Management 00:09:43.241 ================ 00:09:43.241 Number of Power States: 1 00:09:43.241 Current Power State: Power State #0 00:09:43.241 Power State #0: 00:09:43.241 Max Power: 25.00 W 00:09:43.242 Non-Operational State: Operational 00:09:43.242 Entry Latency: 16 microseconds 00:09:43.242 Exit Latency: 4 microseconds 00:09:43.242 Relative Read Throughput: 0 00:09:43.242 Relative Read Latency: 0 00:09:43.242 Relative Write Throughput: 0 00:09:43.242 Relative Write Latency: 0 00:09:43.242 Idle Power[2024-07-24 04:58:57.724482] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 69157 terminated unexpected 00:09:43.242 : Not Reported 00:09:43.242 Active Power: Not Reported 00:09:43.242 Non-Operational Permissive Mode: Not Supported 00:09:43.242 00:09:43.242 Health Information 00:09:43.242 ================== 00:09:43.242 Critical Warnings: 00:09:43.242 Available Spare Space: OK 00:09:43.242 Temperature: OK 00:09:43.242 Device Reliability: OK 00:09:43.242 Read Only: No 00:09:43.242 Volatile Memory Backup: OK 00:09:43.242 Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.242 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:43.242 Available Spare: 0% 00:09:43.242 Available Spare Threshold: 0% 00:09:43.242 Life Percentage Used: 0% 00:09:43.242 Data Units Read: 700 00:09:43.242 Data Units Written: 591 00:09:43.242 Host Read Commands: 33788 00:09:43.242 Host Write Commands: 32826 00:09:43.242 Controller Busy Time: 0 minutes 00:09:43.242 Power Cycles: 0 00:09:43.242 Power On Hours: 0 hours 00:09:43.242 Unsafe Shutdowns: 0 00:09:43.242 Unrecoverable Media Errors: 0 00:09:43.242 Lifetime Error Log Entries: 0 00:09:43.242 Warning Temperature Time: 0 minutes 00:09:43.242 Critical Temperature Time: 0 minutes 00:09:43.242 00:09:43.242 Number of Queues 00:09:43.242 ================ 00:09:43.242 Number of I/O Submission Queues: 64 00:09:43.242 Number of I/O Completion Queues: 64 00:09:43.242 00:09:43.242 ZNS Specific Controller Data 00:09:43.242 ============================ 00:09:43.242 Zone Append Size Limit: 0 00:09:43.242 00:09:43.242 00:09:43.242 Active Namespaces 00:09:43.242 ================= 00:09:43.242 Namespace ID:1 00:09:43.242 Error Recovery Timeout: Unlimited 00:09:43.242 Command Set Identifier: NVM (00h) 00:09:43.242 Deallocate: Supported 00:09:43.242 Deallocated/Unwritten Error: Supported 00:09:43.242 Deallocated Read Value: All 0x00 00:09:43.242 Deallocate in Write Zeroes: Not Supported 00:09:43.242 Deallocated Guard Field: 0xFFFF 00:09:43.242 Flush: Supported 00:09:43.242 Reservation: Not Supported 00:09:43.242 Metadata Transferred as: 
Separate Metadata Buffer 00:09:43.242 Namespace Sharing Capabilities: Private 00:09:43.242 Size (in LBAs): 1548666 (5GiB) 00:09:43.242 Capacity (in LBAs): 1548666 (5GiB) 00:09:43.242 Utilization (in LBAs): 1548666 (5GiB) 00:09:43.242 Thin Provisioning: Not Supported 00:09:43.242 Per-NS Atomic Units: No 00:09:43.242 Maximum Single Source Range Length: 128 00:09:43.242 Maximum Copy Length: 128 00:09:43.242 Maximum Source Range Count: 128 00:09:43.242 NGUID/EUI64 Never Reused: No 00:09:43.242 Namespace Write Protected: No 00:09:43.242 Number of LBA Formats: 8 00:09:43.242 Current LBA Format: LBA Format #07 00:09:43.242 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:43.242 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:43.242 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:43.242 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:43.242 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:43.242 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:43.242 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:43.242 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:43.242 00:09:43.242 NVM Specific Namespace Data 00:09:43.242 =========================== 00:09:43.242 Logical Block Storage Tag Mask: 0 00:09:43.242 Protection Information Capabilities: 00:09:43.242 16b Guard Protection Information Storage Tag Support: No 00:09:43.242 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:43.242 Storage Tag Check Read Support: No 00:09:43.242 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.242 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.242 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.242 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.242 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.242 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.242 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.242 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.242 ===================================================== 00:09:43.242 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:43.242 ===================================================== 00:09:43.242 Controller Capabilities/Features 00:09:43.242 ================================ 00:09:43.242 Vendor ID: 1b36 00:09:43.242 Subsystem Vendor ID: 1af4 00:09:43.242 Serial Number: 12341 00:09:43.242 Model Number: QEMU NVMe Ctrl 00:09:43.242 Firmware Version: 8.0.0 00:09:43.242 Recommended Arb Burst: 6 00:09:43.242 IEEE OUI Identifier: 00 54 52 00:09:43.242 Multi-path I/O 00:09:43.242 May have multiple subsystem ports: No 00:09:43.242 May have multiple controllers: No 00:09:43.242 Associated with SR-IOV VF: No 00:09:43.242 Max Data Transfer Size: 524288 00:09:43.242 Max Number of Namespaces: 256 00:09:43.242 Max Number of I/O Queues: 64 00:09:43.242 NVMe Specification Version (VS): 1.4 00:09:43.242 NVMe Specification Version (Identify): 1.4 00:09:43.242 Maximum Queue Entries: 2048 00:09:43.242 Contiguous Queues Required: Yes 00:09:43.242 Arbitration Mechanisms Supported 00:09:43.242 Weighted Round Robin: Not Supported 00:09:43.242 Vendor Specific: Not Supported 00:09:43.242 Reset Timeout: 7500 ms 
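The bracketed nvme_ctrlr.c *ERROR* messages that appear inside these dumps (one above, two more below) come from another SPDK process exiting while spdk_nvme_identify was printing, so they land interleaved with the identify output. Pulling them back out of a saved copy of this console takes one grep; identify.log here is a hypothetical file holding the text above:

    grep -n 'nvme_ctrlr_remove_inactive_proc' identify.log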
00:09:43.242 Doorbell Stride: 4 bytes 00:09:43.242 NVM Subsystem Reset: Not Supported 00:09:43.242 Command Sets Supported 00:09:43.242 NVM Command Set: Supported 00:09:43.242 Boot Partition: Not Supported 00:09:43.242 Memory Page Size Minimum: 4096 bytes 00:09:43.242 Memory Page Size Maximum: 65536 bytes 00:09:43.242 Persistent Memory Region: Not Supported 00:09:43.242 Optional Asynchronous Events Supported 00:09:43.242 Namespace Attribute Notices: Supported 00:09:43.242 Firmware Activation Notices: Not Supported 00:09:43.242 ANA Change Notices: Not Supported 00:09:43.242 PLE Aggregate Log Change Notices: Not Supported 00:09:43.242 LBA Status Info Alert Notices: Not Supported 00:09:43.242 EGE Aggregate Log Change Notices: Not Supported 00:09:43.242 Normal NVM Subsystem Shutdown event: Not Supported 00:09:43.242 Zone Descriptor Change Notices: Not Supported 00:09:43.242 Discovery Log Change Notices: Not Supported 00:09:43.242 Controller Attributes 00:09:43.242 128-bit Host Identifier: Not Supported 00:09:43.242 Non-Operational Permissive Mode: Not Supported 00:09:43.242 NVM Sets: Not Supported 00:09:43.242 Read Recovery Levels: Not Supported 00:09:43.242 Endurance Groups: Not Supported 00:09:43.242 Predictable Latency Mode: Not Supported 00:09:43.242 Traffic Based Keep Alive: Not Supported 00:09:43.242 Namespace Granularity: Not Supported 00:09:43.242 SQ Associations: Not Supported 00:09:43.242 UUID List: Not Supported 00:09:43.242 Multi-Domain Subsystem: Not Supported 00:09:43.242 Fixed Capacity Management: Not Supported 00:09:43.242 Variable Capacity Management: Not Supported 00:09:43.242 Delete Endurance Group: Not Supported 00:09:43.242 Delete NVM Set: Not Supported 00:09:43.242 Extended LBA Formats Supported: Supported 00:09:43.242 Flexible Data Placement Supported: Not Supported 00:09:43.242 00:09:43.242 Controller Memory Buffer Support 00:09:43.242 ================================ 00:09:43.242 Supported: No 00:09:43.242 00:09:43.242 Persistent Memory Region Support 00:09:43.242 ================================ 00:09:43.242 Supported: No 00:09:43.242 00:09:43.242 Admin Command Set Attributes 00:09:43.242 ============================ 00:09:43.242 Security Send/Receive: Not Supported 00:09:43.242 Format NVM: Supported 00:09:43.242 Firmware Activate/Download: Not Supported 00:09:43.242 Namespace Management: Supported 00:09:43.242 Device Self-Test: Not Supported 00:09:43.242 Directives: Supported 00:09:43.242 NVMe-MI: Not Supported 00:09:43.242 Virtualization Management: Not Supported 00:09:43.242 Doorbell Buffer Config: Supported 00:09:43.242 Get LBA Status Capability: Not Supported 00:09:43.242 Command & Feature Lockdown Capability: Not Supported 00:09:43.242 Abort Command Limit: 4 00:09:43.242 Async Event Request Limit: 4 00:09:43.242 Number of Firmware Slots: N/A 00:09:43.242 Firmware Slot 1 Read-Only: N/A 00:09:43.242 Firmware Activation Without Reset: N/A 00:09:43.242 Multiple Update Detection Support: N/A 00:09:43.242 Firmware Update Granularity: No Information Provided 00:09:43.243 Per-Namespace SMART Log: Yes 00:09:43.243 Asymmetric Namespace Access Log Page: Not Supported 00:09:43.243 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:43.243 Command Effects Log Page: Supported 00:09:43.243 Get Log Page Extended Data: Supported 00:09:43.243 Telemetry Log Pages: Not Supported 00:09:43.243 Persistent Event Log Pages: Not Supported 00:09:43.243 Supported Log Pages Log Page: May Support 00:09:43.243 Commands Supported & Effects Log Page: Not Supported 00:09:43.243 Feature Identifiers & 
Effects Log Page: May Support 00:09:43.243 NVMe-MI Commands & Effects Log Page: May Support 00:09:43.243 Data Area 4 for Telemetry Log: Not Supported 00:09:43.243 Error Log Page Entries Supported: 1 00:09:43.243 Keep Alive: Not Supported 00:09:43.243 00:09:43.243 NVM Command Set Attributes 00:09:43.243 ========================== 00:09:43.243 Submission Queue Entry Size 00:09:43.243 Max: 64 00:09:43.243 Min: 64 00:09:43.243 Completion Queue Entry Size 00:09:43.243 Max: 16 00:09:43.243 Min: 16 00:09:43.243 Number of Namespaces: 256 00:09:43.243 Compare Command: Supported 00:09:43.243 Write Uncorrectable Command: Not Supported 00:09:43.243 Dataset Management Command: Supported 00:09:43.243 Write Zeroes Command: Supported 00:09:43.243 Set Features Save Field: Supported 00:09:43.243 Reservations: Not Supported 00:09:43.243 Timestamp: Supported 00:09:43.243 Copy: Supported 00:09:43.243 Volatile Write Cache: Present 00:09:43.243 Atomic Write Unit (Normal): 1 00:09:43.243 Atomic Write Unit (PFail): 1 00:09:43.243 Atomic Compare & Write Unit: 1 00:09:43.243 Fused Compare & Write: Not Supported 00:09:43.243 Scatter-Gather List 00:09:43.243 SGL Command Set: Supported 00:09:43.243 SGL Keyed: Not Supported 00:09:43.243 SGL Bit Bucket Descriptor: Not Supported 00:09:43.243 SGL Metadata Pointer: Not Supported 00:09:43.243 Oversized SGL: Not Supported 00:09:43.243 SGL Metadata Address: Not Supported 00:09:43.243 SGL Offset: Not Supported 00:09:43.243 Transport SGL Data Block: Not Supported 00:09:43.243 Replay Protected Memory Block: Not Supported 00:09:43.243 00:09:43.243 Firmware Slot Information 00:09:43.243 ========================= 00:09:43.243 Active slot: 1 00:09:43.243 Slot 1 Firmware Revision: 1.0 00:09:43.243 00:09:43.243 00:09:43.243 Commands Supported and Effects 00:09:43.243 ============================== 00:09:43.243 Admin Commands 00:09:43.243 -------------- 00:09:43.243 Delete I/O Submission Queue (00h): Supported 00:09:43.243 Create I/O Submission Queue (01h): Supported 00:09:43.243 Get Log Page (02h): Supported 00:09:43.243 Delete I/O Completion Queue (04h): Supported 00:09:43.243 Create I/O Completion Queue (05h): Supported 00:09:43.243 Identify (06h): Supported 00:09:43.243 Abort (08h): Supported 00:09:43.243 Set Features (09h): Supported 00:09:43.243 Get Features (0Ah): Supported 00:09:43.243 Asynchronous Event Request (0Ch): Supported 00:09:43.243 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:43.243 Directive Send (19h): Supported 00:09:43.243 Directive Receive (1Ah): Supported 00:09:43.243 Virtualization Management (1Ch): Supported 00:09:43.243 Doorbell Buffer Config (7Ch): Supported 00:09:43.243 Format NVM (80h): Supported LBA-Change 00:09:43.243 I/O Commands 00:09:43.243 ------------ 00:09:43.243 Flush (00h): Supported LBA-Change 00:09:43.243 Write (01h): Supported LBA-Change 00:09:43.243 Read (02h): Supported 00:09:43.243 Compare (05h): Supported 00:09:43.243 Write Zeroes (08h): Supported LBA-Change 00:09:43.243 Dataset Management (09h): Supported LBA-Change 00:09:43.243 Unknown (0Ch): Supported 00:09:43.243 Unknown (12h): Supported 00:09:43.243 Copy (19h): Supported LBA-Change 00:09:43.243 Unknown (1Dh): Supported LBA-Change 00:09:43.243 00:09:43.243 Error Log 00:09:43.243 ========= 00:09:43.243 00:09:43.243 Arbitration 00:09:43.243 =========== 00:09:43.243 Arbitration Burst: no limit 00:09:43.243 00:09:43.243 Power Management 00:09:43.243 ================ 00:09:43.243 Number of Power States: 1 00:09:43.243 Current Power State: Power State #0 00:09:43.243 Power 
State #0: 00:09:43.243 Max Power: 25.00 W 00:09:43.243 Non-Operational State: Operational 00:09:43.243 Entry Latency: 16 microseconds 00:09:43.243 Exit Latency: 4 microseconds 00:09:43.243 Relative Read Throughput: 0 00:09:43.243 Relative Read Latency: 0 00:09:43.243 Relative Write Throughput: 0 00:09:43.243 Relative Write Latency: 0 00:09:43.243 Idle Power: Not Reported 00:09:43.243 Active Power: Not Reported 00:09:43.243 Non-Operational Permissive Mode: Not Supported 00:09:43.243 00:09:43.243 Health Information 00:09:43.243 ================== 00:09:43.243 Critical Warnings: 00:09:43.243 Available Spare Space: OK 00:09:43.243 [2024-07-24 04:58:57.725713] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 69157 terminated unexpectedly 00:09:43.243 Temperature: OK 00:09:43.243 Device Reliability: OK 00:09:43.243 Read Only: No 00:09:43.243 Volatile Memory Backup: OK 00:09:43.243 Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.243 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:43.243 Available Spare: 0% 00:09:43.243 Available Spare Threshold: 0% 00:09:43.243 Life Percentage Used: 0% 00:09:43.243 Data Units Read: 1112 00:09:43.243 Data Units Written: 890 00:09:43.243 Host Read Commands: 50967 00:09:43.243 Host Write Commands: 47937 00:09:43.243 Controller Busy Time: 0 minutes 00:09:43.243 Power Cycles: 0 00:09:43.243 Power On Hours: 0 hours 00:09:43.243 Unsafe Shutdowns: 0 00:09:43.243 Unrecoverable Media Errors: 0 00:09:43.243 Lifetime Error Log Entries: 0 00:09:43.243 Warning Temperature Time: 0 minutes 00:09:43.243 Critical Temperature Time: 0 minutes 00:09:43.243 00:09:43.243 Number of Queues 00:09:43.243 ================ 00:09:43.243 Number of I/O Submission Queues: 64 00:09:43.243 Number of I/O Completion Queues: 64 00:09:43.243 00:09:43.243 ZNS Specific Controller Data 00:09:43.243 ============================ 00:09:43.243 Zone Append Size Limit: 0 00:09:43.243 00:09:43.243 00:09:43.243 Active Namespaces 00:09:43.243 ================= 00:09:43.243 Namespace ID:1 00:09:43.243 Error Recovery Timeout: Unlimited 00:09:43.243 Command Set Identifier: NVM (00h) 00:09:43.243 Deallocate: Supported 00:09:43.243 Deallocated/Unwritten Error: Supported 00:09:43.243 Deallocated Read Value: All 0x00 00:09:43.243 Deallocate in Write Zeroes: Not Supported 00:09:43.243 Deallocated Guard Field: 0xFFFF 00:09:43.243 Flush: Supported 00:09:43.243 Reservation: Not Supported 00:09:43.243 Namespace Sharing Capabilities: Private 00:09:43.243 Size (in LBAs): 1310720 (5GiB) 00:09:43.243 Capacity (in LBAs): 1310720 (5GiB) 00:09:43.243 Utilization (in LBAs): 1310720 (5GiB) 00:09:43.243 Thin Provisioning: Not Supported 00:09:43.243 Per-NS Atomic Units: No 00:09:43.243 Maximum Single Source Range Length: 128 00:09:43.243 Maximum Copy Length: 128 00:09:43.243 Maximum Source Range Count: 128 00:09:43.243 NGUID/EUI64 Never Reused: No 00:09:43.243 Namespace Write Protected: No 00:09:43.243 Number of LBA Formats: 8 00:09:43.243 Current LBA Format: LBA Format #04 00:09:43.243 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:43.243 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:43.243 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:43.243 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:43.243 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:43.243 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:43.243 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:43.243 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:43.243 00:09:43.243 NVM 
Specific Namespace Data 00:09:43.243 =========================== 00:09:43.243 Logical Block Storage Tag Mask: 0 00:09:43.243 Protection Information Capabilities: 00:09:43.243 16b Guard Protection Information Storage Tag Support: No 00:09:43.243 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:43.243 Storage Tag Check Read Support: No 00:09:43.243 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.243 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.243 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.243 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.243 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.243 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.243 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.243 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.243 ===================================================== 00:09:43.243 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:43.243 ===================================================== 00:09:43.243 Controller Capabilities/Features 00:09:43.244 ================================ 00:09:43.244 Vendor ID: 1b36 00:09:43.244 Subsystem Vendor ID: 1af4 00:09:43.244 Serial Number: 12343 00:09:43.244 Model Number: QEMU NVMe Ctrl 00:09:43.244 Firmware Version: 8.0.0 00:09:43.244 Recommended Arb Burst: 6 00:09:43.244 IEEE OUI Identifier: 00 54 52 00:09:43.244 Multi-path I/O 00:09:43.244 May have multiple subsystem ports: No 00:09:43.244 May have multiple controllers: Yes 00:09:43.244 Associated with SR-IOV VF: No 00:09:43.244 Max Data Transfer Size: 524288 00:09:43.244 Max Number of Namespaces: 256 00:09:43.244 Max Number of I/O Queues: 64 00:09:43.244 NVMe Specification Version (VS): 1.4 00:09:43.244 NVMe Specification Version (Identify): 1.4 00:09:43.244 Maximum Queue Entries: 2048 00:09:43.244 Contiguous Queues Required: Yes 00:09:43.244 Arbitration Mechanisms Supported 00:09:43.244 Weighted Round Robin: Not Supported 00:09:43.244 Vendor Specific: Not Supported 00:09:43.244 Reset Timeout: 7500 ms 00:09:43.244 Doorbell Stride: 4 bytes 00:09:43.244 NVM Subsystem Reset: Not Supported 00:09:43.244 Command Sets Supported 00:09:43.244 NVM Command Set: Supported 00:09:43.244 Boot Partition: Not Supported 00:09:43.244 Memory Page Size Minimum: 4096 bytes 00:09:43.244 Memory Page Size Maximum: 65536 bytes 00:09:43.244 Persistent Memory Region: Not Supported 00:09:43.244 Optional Asynchronous Events Supported 00:09:43.244 Namespace Attribute Notices: Supported 00:09:43.244 Firmware Activation Notices: Not Supported 00:09:43.244 ANA Change Notices: Not Supported 00:09:43.244 PLE Aggregate Log Change Notices: Not Supported 00:09:43.244 LBA Status Info Alert Notices: Not Supported 00:09:43.244 EGE Aggregate Log Change Notices: Not Supported 00:09:43.244 Normal NVM Subsystem Shutdown event: Not Supported 00:09:43.244 Zone Descriptor Change Notices: Not Supported 00:09:43.244 Discovery Log Change Notices: Not Supported 00:09:43.244 Controller Attributes 00:09:43.244 128-bit Host Identifier: Not Supported 00:09:43.244 Non-Operational Permissive Mode: Not Supported 00:09:43.244 NVM Sets: Not Supported 00:09:43.244 Read Recovery 
Levels: Not Supported 00:09:43.244 Endurance Groups: Supported 00:09:43.244 Predictable Latency Mode: Not Supported 00:09:43.244 Traffic Based Keep Alive: Not Supported 00:09:43.244 Namespace Granularity: Not Supported 00:09:43.244 SQ Associations: Not Supported 00:09:43.244 UUID List: Not Supported 00:09:43.244 Multi-Domain Subsystem: Not Supported 00:09:43.244 Fixed Capacity Management: Not Supported 00:09:43.244 Variable Capacity Management: Not Supported 00:09:43.244 Delete Endurance Group: Not Supported 00:09:43.244 Delete NVM Set: Not Supported 00:09:43.244 Extended LBA Formats Supported: Supported 00:09:43.244 Flexible Data Placement Supported: Supported 00:09:43.244 00:09:43.244 Controller Memory Buffer Support 00:09:43.244 ================================ 00:09:43.244 Supported: No 00:09:43.244 00:09:43.244 Persistent Memory Region Support 00:09:43.244 ================================ 00:09:43.244 Supported: No 00:09:43.244 00:09:43.244 Admin Command Set Attributes 00:09:43.244 ============================ 00:09:43.244 Security Send/Receive: Not Supported 00:09:43.244 Format NVM: Supported 00:09:43.244 Firmware Activate/Download: Not Supported 00:09:43.244 Namespace Management: Supported 00:09:43.244 Device Self-Test: Not Supported 00:09:43.244 Directives: Supported 00:09:43.244 NVMe-MI: Not Supported 00:09:43.244 Virtualization Management: Not Supported 00:09:43.244 Doorbell Buffer Config: Supported 00:09:43.244 Get LBA Status Capability: Not Supported 00:09:43.244 Command & Feature Lockdown Capability: Not Supported 00:09:43.244 Abort Command Limit: 4 00:09:43.244 Async Event Request Limit: 4 00:09:43.244 Number of Firmware Slots: N/A 00:09:43.244 Firmware Slot 1 Read-Only: N/A 00:09:43.244 Firmware Activation Without Reset: N/A 00:09:43.244 Multiple Update Detection Support: N/A 00:09:43.244 Firmware Update Granularity: No Information Provided 00:09:43.244 Per-Namespace SMART Log: Yes 00:09:43.244 Asymmetric Namespace Access Log Page: Not Supported 00:09:43.244 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:43.244 Command Effects Log Page: Supported 00:09:43.244 Get Log Page Extended Data: Supported 00:09:43.244 Telemetry Log Pages: Not Supported 00:09:43.244 Persistent Event Log Pages: Not Supported 00:09:43.244 Supported Log Pages Log Page: May Support 00:09:43.244 Commands Supported & Effects Log Page: Not Supported 00:09:43.244 Feature Identifiers & Effects Log Page: May Support 00:09:43.244 NVMe-MI Commands & Effects Log Page: May Support 00:09:43.244 Data Area 4 for Telemetry Log: Not Supported 00:09:43.244 Error Log Page Entries Supported: 1 00:09:43.244 Keep Alive: Not Supported 00:09:43.244 00:09:43.244 NVM Command Set Attributes 00:09:43.244 ========================== 00:09:43.244 Submission Queue Entry Size 00:09:43.244 Max: 64 00:09:43.244 Min: 64 00:09:43.244 Completion Queue Entry Size 00:09:43.244 Max: 16 00:09:43.244 Min: 16 00:09:43.244 Number of Namespaces: 256 00:09:43.244 Compare Command: Supported 00:09:43.244 Write Uncorrectable Command: Not Supported 00:09:43.244 Dataset Management Command: Supported 00:09:43.244 Write Zeroes Command: Supported 00:09:43.244 Set Features Save Field: Supported 00:09:43.244 Reservations: Not Supported 00:09:43.244 Timestamp: Supported 00:09:43.244 Copy: Supported 00:09:43.244 Volatile Write Cache: Present 00:09:43.244 Atomic Write Unit (Normal): 1 00:09:43.244 Atomic Write Unit (PFail): 1 00:09:43.244 Atomic Compare & Write Unit: 1 00:09:43.244 Fused Compare & Write: Not Supported 00:09:43.244 Scatter-Gather List 
00:09:43.244 SGL Command Set: Supported 00:09:43.244 SGL Keyed: Not Supported 00:09:43.244 SGL Bit Bucket Descriptor: Not Supported 00:09:43.244 SGL Metadata Pointer: Not Supported 00:09:43.244 Oversized SGL: Not Supported 00:09:43.244 SGL Metadata Address: Not Supported 00:09:43.244 SGL Offset: Not Supported 00:09:43.244 Transport SGL Data Block: Not Supported 00:09:43.244 Replay Protected Memory Block: Not Supported 00:09:43.244 00:09:43.244 Firmware Slot Information 00:09:43.244 ========================= 00:09:43.244 Active slot: 1 00:09:43.244 Slot 1 Firmware Revision: 1.0 00:09:43.244 00:09:43.244 00:09:43.244 Commands Supported and Effects 00:09:43.244 ============================== 00:09:43.244 Admin Commands 00:09:43.244 -------------- 00:09:43.244 Delete I/O Submission Queue (00h): Supported 00:09:43.244 Create I/O Submission Queue (01h): Supported 00:09:43.244 Get Log Page (02h): Supported 00:09:43.244 Delete I/O Completion Queue (04h): Supported 00:09:43.244 Create I/O Completion Queue (05h): Supported 00:09:43.244 Identify (06h): Supported 00:09:43.244 Abort (08h): Supported 00:09:43.244 Set Features (09h): Supported 00:09:43.244 Get Features (0Ah): Supported 00:09:43.244 Asynchronous Event Request (0Ch): Supported 00:09:43.244 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:43.244 Directive Send (19h): Supported 00:09:43.244 Directive Receive (1Ah): Supported 00:09:43.244 Virtualization Management (1Ch): Supported 00:09:43.244 Doorbell Buffer Config (7Ch): Supported 00:09:43.244 Format NVM (80h): Supported LBA-Change 00:09:43.244 I/O Commands 00:09:43.244 ------------ 00:09:43.244 Flush (00h): Supported LBA-Change 00:09:43.244 Write (01h): Supported LBA-Change 00:09:43.244 Read (02h): Supported 00:09:43.244 Compare (05h): Supported 00:09:43.244 Write Zeroes (08h): Supported LBA-Change 00:09:43.244 Dataset Management (09h): Supported LBA-Change 00:09:43.244 Unknown (0Ch): Supported 00:09:43.244 Unknown (12h): Supported 00:09:43.244 Copy (19h): Supported LBA-Change 00:09:43.244 Unknown (1Dh): Supported LBA-Change 00:09:43.244 00:09:43.244 Error Log 00:09:43.244 ========= 00:09:43.244 00:09:43.244 Arbitration 00:09:43.244 =========== 00:09:43.244 Arbitration Burst: no limit 00:09:43.244 00:09:43.244 Power Management 00:09:43.244 ================ 00:09:43.244 Number of Power States: 1 00:09:43.244 Current Power State: Power State #0 00:09:43.244 Power State #0: 00:09:43.244 Max Power: 25.00 W 00:09:43.244 Non-Operational State: Operational 00:09:43.244 Entry Latency: 16 microseconds 00:09:43.244 Exit Latency: 4 microseconds 00:09:43.244 Relative Read Throughput: 0 00:09:43.244 Relative Read Latency: 0 00:09:43.244 Relative Write Throughput: 0 00:09:43.244 Relative Write Latency: 0 00:09:43.244 Idle Power: Not Reported 00:09:43.244 Active Power: Not Reported 00:09:43.244 Non-Operational Permissive Mode: Not Supported 00:09:43.244 00:09:43.244 Health Information 00:09:43.245 ================== 00:09:43.245 Critical Warnings: 00:09:43.245 Available Spare Space: OK 00:09:43.245 Temperature: OK 00:09:43.245 Device Reliability: OK 00:09:43.245 Read Only: No 00:09:43.245 Volatile Memory Backup: OK 00:09:43.245 Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.245 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:43.245 Available Spare: 0% 00:09:43.245 Available Spare Threshold: 0% 00:09:43.245 Life Percentage Used: 0% 00:09:43.245 Data Units Read: 821 00:09:43.245 Data Units Written: 714 00:09:43.245 Host Read Commands: 35126 00:09:43.245 Host Write Commands: 33716 
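Every controller's Health Information block repeats the same counter names, so comparing the four controllers from a saved capture is easiest with a filter. A minimal sketch, again assuming a hypothetical identify.log capture of this output:

    grep -E 'Data Units (Read|Written)|Host (Read|Write) Commands' identify.log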
00:09:43.245 Controller Busy Time: 0 minutes 00:09:43.245 Power Cycles: 0 00:09:43.245 Power On Hours: 0 hours 00:09:43.245 Unsafe Shutdowns: 0 00:09:43.245 Unrecoverable Media Errors: 0 00:09:43.245 Lifetime Error Log Entries: 0 00:09:43.245 Warning Temperature Time: 0 minutes 00:09:43.245 Critical Temperature Time: 0 minutes 00:09:43.245 00:09:43.245 Number of Queues 00:09:43.245 ================ 00:09:43.245 Number of I/O Submission Queues: 64 00:09:43.245 Number of I/O Completion Queues: 64 00:09:43.245 00:09:43.245 ZNS Specific Controller Data 00:09:43.245 ============================ 00:09:43.245 Zone Append Size Limit: 0 00:09:43.245 00:09:43.245 00:09:43.245 Active Namespaces 00:09:43.245 ================= 00:09:43.245 Namespace ID:1 00:09:43.245 Error Recovery Timeout: Unlimited 00:09:43.245 Command Set Identifier: NVM (00h) 00:09:43.245 Deallocate: Supported 00:09:43.245 Deallocated/Unwritten Error: Supported 00:09:43.245 Deallocated Read Value: All 0x00 00:09:43.245 Deallocate in Write Zeroes: Not Supported 00:09:43.245 Deallocated Guard Field: 0xFFFF 00:09:43.245 Flush: Supported 00:09:43.245 Reservation: Not Supported 00:09:43.245 Namespace Sharing Capabilities: Multiple Controllers 00:09:43.245 Size (in LBAs): 262144 (1GiB) 00:09:43.245 Capacity (in LBAs): 262144 (1GiB) 00:09:43.245 Utilization (in LBAs): 262144 (1GiB) 00:09:43.245 Thin Provisioning: Not Supported 00:09:43.245 Per-NS Atomic Units: No 00:09:43.245 Maximum Single Source Range Length: 128 00:09:43.245 Maximum Copy Length: 128 00:09:43.245 Maximum Source Range Count: 128 00:09:43.245 NGUID/EUI64 Never Reused: No 00:09:43.245 Namespace Write Protected: No 00:09:43.245 Endurance group ID: 1 00:09:43.245 Number of LBA Formats: 8 00:09:43.245 Current LBA Format: LBA Format #04 00:09:43.245 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:43.245 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:43.245 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:43.245 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:43.245 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:43.245 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:43.245 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:43.245 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:43.245 00:09:43.245 Get Feature FDP: 00:09:43.245 ================ 00:09:43.245 Enabled: Yes 00:09:43.245 FDP configuration index: 0 00:09:43.245 00:09:43.245 FDP configurations log page 00:09:43.245 =========================== 00:09:43.245 Number of FDP configurations: 1 00:09:43.245 Version: 0 00:09:43.245 Size: 112 00:09:43.245 FDP Configuration Descriptor: 0 00:09:43.245 Descriptor Size: 96 00:09:43.245 Reclaim Group Identifier format: 2 00:09:43.245 FDP Volatile Write Cache: Not Present 00:09:43.245 FDP Configuration: Valid 00:09:43.245 Vendor Specific Size: 0 00:09:43.245 Number of Reclaim Groups: 2 00:09:43.245 Number of Reclaim Unit Handles: 8 00:09:43.245 Max Placement Identifiers: 128 00:09:43.245 Number of Namespaces Supported: 256 00:09:43.245 Reclaim Unit Nominal Size: 6000000 bytes 00:09:43.245 Estimated Reclaim Unit Time Limit: Not Reported 00:09:43.245 RUH Desc #000: RUH Type: Initially Isolated 00:09:43.245 RUH Desc #001: RUH Type: Initially Isolated 00:09:43.245 RUH Desc #002: RUH Type: Initially Isolated 00:09:43.245 RUH Desc #003: RUH Type: Initially Isolated 00:09:43.245 RUH Desc #004: RUH Type: Initially Isolated 00:09:43.245 RUH Desc #005: RUH Type: Initially Isolated 00:09:43.245 RUH Desc #006: RUH Type: Initially Isolated 
00:09:43.245 RUH Desc #007: RUH Type: Initially Isolated 00:09:43.245 00:09:43.245 FDP reclaim unit handle usage log page 00:09:43.245 ====================================== 00:09:43.245 Number of Reclaim Unit Handles: 8 00:09:43.245 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:43.245 RUH Usage Desc #001: RUH Attributes: Unused 00:09:43.245 RUH Usage Desc #002: RUH Attributes: Unused 00:09:43.245 RUH Usage Desc #003: RUH Attributes: Unused 00:09:43.245 RUH Usage Desc #004: RUH Attributes: Unused 00:09:43.245 RUH Usage Desc #005: RUH Attributes: Unused 00:09:43.245 RUH Usage Desc #006: RUH Attributes: Unused 00:09:43.245 RUH Usage Desc #007: RUH Attributes: Unused 00:09:43.245 00:09:43.245 FDP statistics log page 00:09:43.245 ======================= 00:09:43.245 Host bytes with metadata written: 450994176 00:09:43.245 [2024-07-24 04:58:57.728015] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 69157 terminated unexpectedly 00:09:43.245 Media bytes with metadata written: 451039232 00:09:43.245 Media bytes erased: 0 00:09:43.245 00:09:43.245 FDP events log page 00:09:43.245 =================== 00:09:43.245 Number of FDP events: 0 00:09:43.245 00:09:43.245 NVM Specific Namespace Data 00:09:43.245 =========================== 00:09:43.245 Logical Block Storage Tag Mask: 0 00:09:43.245 Protection Information Capabilities: 00:09:43.245 16b Guard Protection Information Storage Tag Support: No 00:09:43.245 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:43.245 Storage Tag Check Read Support: No 00:09:43.245 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.245 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.245 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.245 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.245 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.245 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.245 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.245 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.245 ===================================================== 00:09:43.245 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:43.245 ===================================================== 00:09:43.245 Controller Capabilities/Features 00:09:43.245 ================================ 00:09:43.245 Vendor ID: 1b36 00:09:43.245 Subsystem Vendor ID: 1af4 00:09:43.245 Serial Number: 12342 00:09:43.245 Model Number: QEMU NVMe Ctrl 00:09:43.245 Firmware Version: 8.0.0 00:09:43.245 Recommended Arb Burst: 6 00:09:43.245 IEEE OUI Identifier: 00 54 52 00:09:43.245 Multi-path I/O 00:09:43.245 May have multiple subsystem ports: No 00:09:43.245 May have multiple controllers: No 00:09:43.245 Associated with SR-IOV VF: No 00:09:43.245 Max Data Transfer Size: 524288 00:09:43.245 Max Number of Namespaces: 256 00:09:43.245 Max Number of I/O Queues: 64 00:09:43.245 NVMe Specification Version (VS): 1.4 00:09:43.245 NVMe Specification Version (Identify): 1.4 00:09:43.245 Maximum Queue Entries: 2048 00:09:43.245 Contiguous Queues Required: Yes 00:09:43.245 Arbitration Mechanisms Supported 00:09:43.245 Weighted Round Robin: Not 
Supported 00:09:43.245 Vendor Specific: Not Supported 00:09:43.245 Reset Timeout: 7500 ms 00:09:43.245 Doorbell Stride: 4 bytes 00:09:43.246 NVM Subsystem Reset: Not Supported 00:09:43.246 Command Sets Supported 00:09:43.246 NVM Command Set: Supported 00:09:43.246 Boot Partition: Not Supported 00:09:43.246 Memory Page Size Minimum: 4096 bytes 00:09:43.246 Memory Page Size Maximum: 65536 bytes 00:09:43.246 Persistent Memory Region: Not Supported 00:09:43.246 Optional Asynchronous Events Supported 00:09:43.246 Namespace Attribute Notices: Supported 00:09:43.246 Firmware Activation Notices: Not Supported 00:09:43.246 ANA Change Notices: Not Supported 00:09:43.246 PLE Aggregate Log Change Notices: Not Supported 00:09:43.246 LBA Status Info Alert Notices: Not Supported 00:09:43.246 EGE Aggregate Log Change Notices: Not Supported 00:09:43.246 Normal NVM Subsystem Shutdown event: Not Supported 00:09:43.246 Zone Descriptor Change Notices: Not Supported 00:09:43.246 Discovery Log Change Notices: Not Supported 00:09:43.246 Controller Attributes 00:09:43.246 128-bit Host Identifier: Not Supported 00:09:43.246 Non-Operational Permissive Mode: Not Supported 00:09:43.246 NVM Sets: Not Supported 00:09:43.246 Read Recovery Levels: Not Supported 00:09:43.246 Endurance Groups: Not Supported 00:09:43.246 Predictable Latency Mode: Not Supported 00:09:43.246 Traffic Based Keep Alive: Not Supported 00:09:43.246 Namespace Granularity: Not Supported 00:09:43.246 SQ Associations: Not Supported 00:09:43.246 UUID List: Not Supported 00:09:43.246 Multi-Domain Subsystem: Not Supported 00:09:43.246 Fixed Capacity Management: Not Supported 00:09:43.246 Variable Capacity Management: Not Supported 00:09:43.246 Delete Endurance Group: Not Supported 00:09:43.246 Delete NVM Set: Not Supported 00:09:43.246 Extended LBA Formats Supported: Supported 00:09:43.246 Flexible Data Placement Supported: Not Supported 00:09:43.246 00:09:43.246 Controller Memory Buffer Support 00:09:43.246 ================================ 00:09:43.246 Supported: No 00:09:43.246 00:09:43.246 Persistent Memory Region Support 00:09:43.246 ================================ 00:09:43.246 Supported: No 00:09:43.246 00:09:43.246 Admin Command Set Attributes 00:09:43.246 ============================ 00:09:43.246 Security Send/Receive: Not Supported 00:09:43.246 Format NVM: Supported 00:09:43.246 Firmware Activate/Download: Not Supported 00:09:43.246 Namespace Management: Supported 00:09:43.246 Device Self-Test: Not Supported 00:09:43.246 Directives: Supported 00:09:43.246 NVMe-MI: Not Supported 00:09:43.246 Virtualization Management: Not Supported 00:09:43.246 Doorbell Buffer Config: Supported 00:09:43.246 Get LBA Status Capability: Not Supported 00:09:43.246 Command & Feature Lockdown Capability: Not Supported 00:09:43.246 Abort Command Limit: 4 00:09:43.246 Async Event Request Limit: 4 00:09:43.246 Number of Firmware Slots: N/A 00:09:43.246 Firmware Slot 1 Read-Only: N/A 00:09:43.246 Firmware Activation Without Reset: N/A 00:09:43.246 Multiple Update Detection Support: N/A 00:09:43.246 Firmware Update Granularity: No Information Provided 00:09:43.246 Per-Namespace SMART Log: Yes 00:09:43.246 Asymmetric Namespace Access Log Page: Not Supported 00:09:43.246 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:43.246 Command Effects Log Page: Supported 00:09:43.246 Get Log Page Extended Data: Supported 00:09:43.246 Telemetry Log Pages: Not Supported 00:09:43.246 Persistent Event Log Pages: Not Supported 00:09:43.246 Supported Log Pages Log Page: May Support 
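The FDP statistics log page printed for the 12343 controller earlier reports both host bytes and media bytes written with metadata; the gap between the two is one rough way to see how much extra the media absorbed. Checking the printed numbers with shell arithmetic:

    host=450994176     # Host bytes with metadata written (from the dump above)
    media=451039232    # Media bytes with metadata written
    echo "$(( media - host )) extra media bytes"    # prints: 45056 extra media bytes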
00:09:43.246 Commands Supported & Effects Log Page: Not Supported 00:09:43.246 Feature Identifiers & Effects Log Page: May Support 00:09:43.246 NVMe-MI Commands & Effects Log Page: May Support 00:09:43.246 Data Area 4 for Telemetry Log: Not Supported 00:09:43.246 Error Log Page Entries Supported: 1 00:09:43.246 Keep Alive: Not Supported 00:09:43.246 00:09:43.246 NVM Command Set Attributes 00:09:43.246 ========================== 00:09:43.246 Submission Queue Entry Size 00:09:43.246 Max: 64 00:09:43.246 Min: 64 00:09:43.246 Completion Queue Entry Size 00:09:43.246 Max: 16 00:09:43.246 Min: 16 00:09:43.246 Number of Namespaces: 256 00:09:43.246 Compare Command: Supported 00:09:43.246 Write Uncorrectable Command: Not Supported 00:09:43.246 Dataset Management Command: Supported 00:09:43.246 Write Zeroes Command: Supported 00:09:43.246 Set Features Save Field: Supported 00:09:43.246 Reservations: Not Supported 00:09:43.246 Timestamp: Supported 00:09:43.246 Copy: Supported 00:09:43.246 Volatile Write Cache: Present 00:09:43.246 Atomic Write Unit (Normal): 1 00:09:43.246 Atomic Write Unit (PFail): 1 00:09:43.246 Atomic Compare & Write Unit: 1 00:09:43.246 Fused Compare & Write: Not Supported 00:09:43.246 Scatter-Gather List 00:09:43.246 SGL Command Set: Supported 00:09:43.246 SGL Keyed: Not Supported 00:09:43.246 SGL Bit Bucket Descriptor: Not Supported 00:09:43.246 SGL Metadata Pointer: Not Supported 00:09:43.246 Oversized SGL: Not Supported 00:09:43.246 SGL Metadata Address: Not Supported 00:09:43.246 SGL Offset: Not Supported 00:09:43.246 Transport SGL Data Block: Not Supported 00:09:43.246 Replay Protected Memory Block: Not Supported 00:09:43.246 00:09:43.246 Firmware Slot Information 00:09:43.246 ========================= 00:09:43.246 Active slot: 1 00:09:43.246 Slot 1 Firmware Revision: 1.0 00:09:43.246 00:09:43.246 00:09:43.246 Commands Supported and Effects 00:09:43.246 ============================== 00:09:43.246 Admin Commands 00:09:43.246 -------------- 00:09:43.246 Delete I/O Submission Queue (00h): Supported 00:09:43.246 Create I/O Submission Queue (01h): Supported 00:09:43.246 Get Log Page (02h): Supported 00:09:43.246 Delete I/O Completion Queue (04h): Supported 00:09:43.246 Create I/O Completion Queue (05h): Supported 00:09:43.246 Identify (06h): Supported 00:09:43.246 Abort (08h): Supported 00:09:43.246 Set Features (09h): Supported 00:09:43.246 Get Features (0Ah): Supported 00:09:43.246 Asynchronous Event Request (0Ch): Supported 00:09:43.246 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:43.246 Directive Send (19h): Supported 00:09:43.246 Directive Receive (1Ah): Supported 00:09:43.246 Virtualization Management (1Ch): Supported 00:09:43.246 Doorbell Buffer Config (7Ch): Supported 00:09:43.246 Format NVM (80h): Supported LBA-Change 00:09:43.246 I/O Commands 00:09:43.246 ------------ 00:09:43.246 Flush (00h): Supported LBA-Change 00:09:43.246 Write (01h): Supported LBA-Change 00:09:43.246 Read (02h): Supported 00:09:43.246 Compare (05h): Supported 00:09:43.246 Write Zeroes (08h): Supported LBA-Change 00:09:43.246 Dataset Management (09h): Supported LBA-Change 00:09:43.246 Unknown (0Ch): Supported 00:09:43.246 Unknown (12h): Supported 00:09:43.246 Copy (19h): Supported LBA-Change 00:09:43.246 Unknown (1Dh): Supported LBA-Change 00:09:43.246 00:09:43.246 Error Log 00:09:43.246 ========= 00:09:43.246 00:09:43.246 Arbitration 00:09:43.246 =========== 00:09:43.246 Arbitration Burst: no limit 00:09:43.246 00:09:43.246 Power Management 00:09:43.246 ================ 
00:09:43.246 Number of Power States: 1 00:09:43.246 Current Power State: Power State #0 00:09:43.246 Power State #0: 00:09:43.246 Max Power: 25.00 W 00:09:43.246 Non-Operational State: Operational 00:09:43.246 Entry Latency: 16 microseconds 00:09:43.246 Exit Latency: 4 microseconds 00:09:43.246 Relative Read Throughput: 0 00:09:43.246 Relative Read Latency: 0 00:09:43.246 Relative Write Throughput: 0 00:09:43.246 Relative Write Latency: 0 00:09:43.246 Idle Power: Not Reported 00:09:43.246 Active Power: Not Reported 00:09:43.246 Non-Operational Permissive Mode: Not Supported 00:09:43.246 00:09:43.246 Health Information 00:09:43.246 ================== 00:09:43.246 Critical Warnings: 00:09:43.246 Available Spare Space: OK 00:09:43.246 Temperature: OK 00:09:43.246 Device Reliability: OK 00:09:43.246 Read Only: No 00:09:43.246 Volatile Memory Backup: OK 00:09:43.246 Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.246 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:43.246 Available Spare: 0% 00:09:43.246 Available Spare Threshold: 0% 00:09:43.246 Life Percentage Used: 0% 00:09:43.246 Data Units Read: 2198 00:09:43.246 Data Units Written: 1878 00:09:43.246 Host Read Commands: 102595 00:09:43.247 Host Write Commands: 98365 00:09:43.247 Controller Busy Time: 0 minutes 00:09:43.247 Power Cycles: 0 00:09:43.247 Power On Hours: 0 hours 00:09:43.247 Unsafe Shutdowns: 0 00:09:43.247 Unrecoverable Media Errors: 0 00:09:43.247 Lifetime Error Log Entries: 0 00:09:43.247 Warning Temperature Time: 0 minutes 00:09:43.247 Critical Temperature Time: 0 minutes 00:09:43.247 00:09:43.247 Number of Queues 00:09:43.247 ================ 00:09:43.247 Number of I/O Submission Queues: 64 00:09:43.247 Number of I/O Completion Queues: 64 00:09:43.247 00:09:43.247 ZNS Specific Controller Data 00:09:43.247 ============================ 00:09:43.247 Zone Append Size Limit: 0 00:09:43.247 00:09:43.247 00:09:43.247 Active Namespaces 00:09:43.247 ================= 00:09:43.247 Namespace ID:1 00:09:43.247 Error Recovery Timeout: Unlimited 00:09:43.247 Command Set Identifier: NVM (00h) 00:09:43.247 Deallocate: Supported 00:09:43.247 Deallocated/Unwritten Error: Supported 00:09:43.247 Deallocated Read Value: All 0x00 00:09:43.247 Deallocate in Write Zeroes: Not Supported 00:09:43.247 Deallocated Guard Field: 0xFFFF 00:09:43.247 Flush: Supported 00:09:43.247 Reservation: Not Supported 00:09:43.247 Namespace Sharing Capabilities: Private 00:09:43.247 Size (in LBAs): 1048576 (4GiB) 00:09:43.247 Capacity (in LBAs): 1048576 (4GiB) 00:09:43.247 Utilization (in LBAs): 1048576 (4GiB) 00:09:43.247 Thin Provisioning: Not Supported 00:09:43.247 Per-NS Atomic Units: No 00:09:43.247 Maximum Single Source Range Length: 128 00:09:43.247 Maximum Copy Length: 128 00:09:43.247 Maximum Source Range Count: 128 00:09:43.247 NGUID/EUI64 Never Reused: No 00:09:43.247 Namespace Write Protected: No 00:09:43.247 Number of LBA Formats: 8 00:09:43.247 Current LBA Format: LBA Format #04 00:09:43.247 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:43.247 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:43.247 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:43.247 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:43.247 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:43.247 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:43.247 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:43.247 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:43.247 00:09:43.247 NVM Specific Namespace Data 00:09:43.247 
=========================== 00:09:43.247 Logical Block Storage Tag Mask: 0 00:09:43.247 Protection Information Capabilities: 00:09:43.247 16b Guard Protection Information Storage Tag Support: No 00:09:43.247 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:43.247 Storage Tag Check Read Support: No 00:09:43.247 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Namespace ID:2 00:09:43.247 Error Recovery Timeout: Unlimited 00:09:43.247 Command Set Identifier: NVM (00h) 00:09:43.247 Deallocate: Supported 00:09:43.247 Deallocated/Unwritten Error: Supported 00:09:43.247 Deallocated Read Value: All 0x00 00:09:43.247 Deallocate in Write Zeroes: Not Supported 00:09:43.247 Deallocated Guard Field: 0xFFFF 00:09:43.247 Flush: Supported 00:09:43.247 Reservation: Not Supported 00:09:43.247 Namespace Sharing Capabilities: Private 00:09:43.247 Size (in LBAs): 1048576 (4GiB) 00:09:43.247 Capacity (in LBAs): 1048576 (4GiB) 00:09:43.247 Utilization (in LBAs): 1048576 (4GiB) 00:09:43.247 Thin Provisioning: Not Supported 00:09:43.247 Per-NS Atomic Units: No 00:09:43.247 Maximum Single Source Range Length: 128 00:09:43.247 Maximum Copy Length: 128 00:09:43.247 Maximum Source Range Count: 128 00:09:43.247 NGUID/EUI64 Never Reused: No 00:09:43.247 Namespace Write Protected: No 00:09:43.247 Number of LBA Formats: 8 00:09:43.247 Current LBA Format: LBA Format #04 00:09:43.247 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:43.247 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:43.247 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:43.247 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:43.247 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:43.247 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:43.247 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:43.247 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:43.247 00:09:43.247 NVM Specific Namespace Data 00:09:43.247 =========================== 00:09:43.247 Logical Block Storage Tag Mask: 0 00:09:43.247 Protection Information Capabilities: 00:09:43.247 16b Guard Protection Information Storage Tag Support: No 00:09:43.247 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:43.247 Storage Tag Check Read Support: No 00:09:43.247 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #04: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Namespace ID:3 00:09:43.247 Error Recovery Timeout: Unlimited 00:09:43.247 Command Set Identifier: NVM (00h) 00:09:43.247 Deallocate: Supported 00:09:43.247 Deallocated/Unwritten Error: Supported 00:09:43.247 Deallocated Read Value: All 0x00 00:09:43.247 Deallocate in Write Zeroes: Not Supported 00:09:43.247 Deallocated Guard Field: 0xFFFF 00:09:43.247 Flush: Supported 00:09:43.247 Reservation: Not Supported 00:09:43.247 Namespace Sharing Capabilities: Private 00:09:43.247 Size (in LBAs): 1048576 (4GiB) 00:09:43.247 Capacity (in LBAs): 1048576 (4GiB) 00:09:43.247 Utilization (in LBAs): 1048576 (4GiB) 00:09:43.247 Thin Provisioning: Not Supported 00:09:43.247 Per-NS Atomic Units: No 00:09:43.247 Maximum Single Source Range Length: 128 00:09:43.247 Maximum Copy Length: 128 00:09:43.247 Maximum Source Range Count: 128 00:09:43.247 NGUID/EUI64 Never Reused: No 00:09:43.247 Namespace Write Protected: No 00:09:43.247 Number of LBA Formats: 8 00:09:43.247 Current LBA Format: LBA Format #04 00:09:43.247 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:43.247 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:43.247 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:43.247 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:43.247 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:43.247 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:43.247 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:43.247 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:43.247 00:09:43.247 NVM Specific Namespace Data 00:09:43.247 =========================== 00:09:43.247 Logical Block Storage Tag Mask: 0 00:09:43.247 Protection Information Capabilities: 00:09:43.247 16b Guard Protection Information Storage Tag Support: No 00:09:43.247 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:43.247 Storage Tag Check Read Support: No 00:09:43.247 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.247 04:58:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:43.247 04:58:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:43.507 ===================================================== 00:09:43.507 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:43.507 ===================================================== 00:09:43.507 
Controller Capabilities/Features 00:09:43.507 ================================ 00:09:43.507 Vendor ID: 1b36 00:09:43.507 Subsystem Vendor ID: 1af4 00:09:43.507 Serial Number: 12340 00:09:43.507 Model Number: QEMU NVMe Ctrl 00:09:43.507 Firmware Version: 8.0.0 00:09:43.507 Recommended Arb Burst: 6 00:09:43.507 IEEE OUI Identifier: 00 54 52 00:09:43.507 Multi-path I/O 00:09:43.507 May have multiple subsystem ports: No 00:09:43.507 May have multiple controllers: No 00:09:43.507 Associated with SR-IOV VF: No 00:09:43.507 Max Data Transfer Size: 524288 00:09:43.507 Max Number of Namespaces: 256 00:09:43.507 Max Number of I/O Queues: 64 00:09:43.507 NVMe Specification Version (VS): 1.4 00:09:43.507 NVMe Specification Version (Identify): 1.4 00:09:43.507 Maximum Queue Entries: 2048 00:09:43.507 Contiguous Queues Required: Yes 00:09:43.507 Arbitration Mechanisms Supported 00:09:43.507 Weighted Round Robin: Not Supported 00:09:43.508 Vendor Specific: Not Supported 00:09:43.508 Reset Timeout: 7500 ms 00:09:43.508 Doorbell Stride: 4 bytes 00:09:43.508 NVM Subsystem Reset: Not Supported 00:09:43.508 Command Sets Supported 00:09:43.508 NVM Command Set: Supported 00:09:43.508 Boot Partition: Not Supported 00:09:43.508 Memory Page Size Minimum: 4096 bytes 00:09:43.508 Memory Page Size Maximum: 65536 bytes 00:09:43.508 Persistent Memory Region: Not Supported 00:09:43.508 Optional Asynchronous Events Supported 00:09:43.508 Namespace Attribute Notices: Supported 00:09:43.508 Firmware Activation Notices: Not Supported 00:09:43.508 ANA Change Notices: Not Supported 00:09:43.508 PLE Aggregate Log Change Notices: Not Supported 00:09:43.508 LBA Status Info Alert Notices: Not Supported 00:09:43.508 EGE Aggregate Log Change Notices: Not Supported 00:09:43.508 Normal NVM Subsystem Shutdown event: Not Supported 00:09:43.508 Zone Descriptor Change Notices: Not Supported 00:09:43.508 Discovery Log Change Notices: Not Supported 00:09:43.508 Controller Attributes 00:09:43.508 128-bit Host Identifier: Not Supported 00:09:43.508 Non-Operational Permissive Mode: Not Supported 00:09:43.508 NVM Sets: Not Supported 00:09:43.508 Read Recovery Levels: Not Supported 00:09:43.508 Endurance Groups: Not Supported 00:09:43.508 Predictable Latency Mode: Not Supported 00:09:43.508 Traffic Based Keep Alive: Not Supported 00:09:43.508 Namespace Granularity: Not Supported 00:09:43.508 SQ Associations: Not Supported 00:09:43.508 UUID List: Not Supported 00:09:43.508 Multi-Domain Subsystem: Not Supported 00:09:43.508 Fixed Capacity Management: Not Supported 00:09:43.508 Variable Capacity Management: Not Supported 00:09:43.508 Delete Endurance Group: Not Supported 00:09:43.508 Delete NVM Set: Not Supported 00:09:43.508 Extended LBA Formats Supported: Supported 00:09:43.508 Flexible Data Placement Supported: Not Supported 00:09:43.508 00:09:43.508 Controller Memory Buffer Support 00:09:43.508 ================================ 00:09:43.508 Supported: No 00:09:43.508 00:09:43.508 Persistent Memory Region Support 00:09:43.508 ================================ 00:09:43.508 Supported: No 00:09:43.508 00:09:43.508 Admin Command Set Attributes 00:09:43.508 ============================ 00:09:43.508 Security Send/Receive: Not Supported 00:09:43.508 Format NVM: Supported 00:09:43.508 Firmware Activate/Download: Not Supported 00:09:43.508 Namespace Management: Supported 00:09:43.508 Device Self-Test: Not Supported 00:09:43.508 Directives: Supported 00:09:43.508 NVMe-MI: Not Supported 00:09:43.508 Virtualization Management: Not Supported 00:09:43.508 
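Each controller dump in this log reports Current Temperature: 323 Kelvin (50 Celsius) against a 343 Kelvin (70 Celsius) threshold. The Celsius figures in parentheses follow from the integer conversion C = K - 273 that the output implies, which is easy to confirm in the shell:

    kelvin=323
    echo "$(( kelvin - 273 )) Celsius"    # prints: 50 Celsius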
Doorbell Buffer Config: Supported 00:09:43.508 Get LBA Status Capability: Not Supported 00:09:43.508 Command & Feature Lockdown Capability: Not Supported 00:09:43.508 Abort Command Limit: 4 00:09:43.508 Async Event Request Limit: 4 00:09:43.508 Number of Firmware Slots: N/A 00:09:43.508 Firmware Slot 1 Read-Only: N/A 00:09:43.508 Firmware Activation Without Reset: N/A 00:09:43.508 Multiple Update Detection Support: N/A 00:09:43.508 Firmware Update Granularity: No Information Provided 00:09:43.508 Per-Namespace SMART Log: Yes 00:09:43.508 Asymmetric Namespace Access Log Page: Not Supported 00:09:43.508 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:43.508 Command Effects Log Page: Supported 00:09:43.508 Get Log Page Extended Data: Supported 00:09:43.508 Telemetry Log Pages: Not Supported 00:09:43.508 Persistent Event Log Pages: Not Supported 00:09:43.508 Supported Log Pages Log Page: May Support 00:09:43.508 Commands Supported & Effects Log Page: Not Supported 00:09:43.508 Feature Identifiers & Effects Log Page: May Support 00:09:43.508 NVMe-MI Commands & Effects Log Page: May Support 00:09:43.508 Data Area 4 for Telemetry Log: Not Supported 00:09:43.508 Error Log Page Entries Supported: 1 00:09:43.508 Keep Alive: Not Supported 00:09:43.508 00:09:43.508 NVM Command Set Attributes 00:09:43.508 ========================== 00:09:43.508 Submission Queue Entry Size 00:09:43.508 Max: 64 00:09:43.508 Min: 64 00:09:43.508 Completion Queue Entry Size 00:09:43.508 Max: 16 00:09:43.508 Min: 16 00:09:43.508 Number of Namespaces: 256 00:09:43.508 Compare Command: Supported 00:09:43.508 Write Uncorrectable Command: Not Supported 00:09:43.508 Dataset Management Command: Supported 00:09:43.508 Write Zeroes Command: Supported 00:09:43.508 Set Features Save Field: Supported 00:09:43.508 Reservations: Not Supported 00:09:43.508 Timestamp: Supported 00:09:43.508 Copy: Supported 00:09:43.508 Volatile Write Cache: Present 00:09:43.508 Atomic Write Unit (Normal): 1 00:09:43.508 Atomic Write Unit (PFail): 1 00:09:43.508 Atomic Compare & Write Unit: 1 00:09:43.508 Fused Compare & Write: Not Supported 00:09:43.508 Scatter-Gather List 00:09:43.508 SGL Command Set: Supported 00:09:43.508 SGL Keyed: Not Supported 00:09:43.508 SGL Bit Bucket Descriptor: Not Supported 00:09:43.508 SGL Metadata Pointer: Not Supported 00:09:43.508 Oversized SGL: Not Supported 00:09:43.508 SGL Metadata Address: Not Supported 00:09:43.508 SGL Offset: Not Supported 00:09:43.508 Transport SGL Data Block: Not Supported 00:09:43.508 Replay Protected Memory Block: Not Supported 00:09:43.508 00:09:43.508 Firmware Slot Information 00:09:43.508 ========================= 00:09:43.508 Active slot: 1 00:09:43.508 Slot 1 Firmware Revision: 1.0 00:09:43.508 00:09:43.508 00:09:43.508 Commands Supported and Effects 00:09:43.508 ============================== 00:09:43.508 Admin Commands 00:09:43.508 -------------- 00:09:43.508 Delete I/O Submission Queue (00h): Supported 00:09:43.508 Create I/O Submission Queue (01h): Supported 00:09:43.508 Get Log Page (02h): Supported 00:09:43.508 Delete I/O Completion Queue (04h): Supported 00:09:43.508 Create I/O Completion Queue (05h): Supported 00:09:43.508 Identify (06h): Supported 00:09:43.508 Abort (08h): Supported 00:09:43.508 Set Features (09h): Supported 00:09:43.508 Get Features (0Ah): Supported 00:09:43.508 Asynchronous Event Request (0Ch): Supported 00:09:43.508 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:43.508 Directive Send (19h): Supported 00:09:43.508 Directive Receive (1Ah): Supported 
00:09:43.508 Virtualization Management (1Ch): Supported 00:09:43.508 Doorbell Buffer Config (7Ch): Supported 00:09:43.508 Format NVM (80h): Supported LBA-Change 00:09:43.508 I/O Commands 00:09:43.508 ------------ 00:09:43.508 Flush (00h): Supported LBA-Change 00:09:43.508 Write (01h): Supported LBA-Change 00:09:43.508 Read (02h): Supported 00:09:43.508 Compare (05h): Supported 00:09:43.508 Write Zeroes (08h): Supported LBA-Change 00:09:43.508 Dataset Management (09h): Supported LBA-Change 00:09:43.508 Unknown (0Ch): Supported 00:09:43.508 Unknown (12h): Supported 00:09:43.508 Copy (19h): Supported LBA-Change 00:09:43.508 Unknown (1Dh): Supported LBA-Change 00:09:43.508 00:09:43.508 Error Log 00:09:43.508 ========= 00:09:43.508 00:09:43.508 Arbitration 00:09:43.508 =========== 00:09:43.508 Arbitration Burst: no limit 00:09:43.508 00:09:43.508 Power Management 00:09:43.508 ================ 00:09:43.508 Number of Power States: 1 00:09:43.508 Current Power State: Power State #0 00:09:43.508 Power State #0: 00:09:43.508 Max Power: 25.00 W 00:09:43.508 Non-Operational State: Operational 00:09:43.508 Entry Latency: 16 microseconds 00:09:43.508 Exit Latency: 4 microseconds 00:09:43.508 Relative Read Throughput: 0 00:09:43.508 Relative Read Latency: 0 00:09:43.508 Relative Write Throughput: 0 00:09:43.508 Relative Write Latency: 0 00:09:43.508 Idle Power: Not Reported 00:09:43.508 Active Power: Not Reported 00:09:43.508 Non-Operational Permissive Mode: Not Supported 00:09:43.508 00:09:43.508 Health Information 00:09:43.508 ================== 00:09:43.508 Critical Warnings: 00:09:43.508 Available Spare Space: OK 00:09:43.508 Temperature: OK 00:09:43.508 Device Reliability: OK 00:09:43.508 Read Only: No 00:09:43.508 Volatile Memory Backup: OK 00:09:43.508 Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.508 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:43.508 Available Spare: 0% 00:09:43.508 Available Spare Threshold: 0% 00:09:43.508 Life Percentage Used: 0% 00:09:43.508 Data Units Read: 700 00:09:43.508 Data Units Written: 591 00:09:43.508 Host Read Commands: 33788 00:09:43.508 Host Write Commands: 32826 00:09:43.508 Controller Busy Time: 0 minutes 00:09:43.508 Power Cycles: 0 00:09:43.508 Power On Hours: 0 hours 00:09:43.508 Unsafe Shutdowns: 0 00:09:43.508 Unrecoverable Media Errors: 0 00:09:43.508 Lifetime Error Log Entries: 0 00:09:43.508 Warning Temperature Time: 0 minutes 00:09:43.508 Critical Temperature Time: 0 minutes 00:09:43.508 00:09:43.509 Number of Queues 00:09:43.509 ================ 00:09:43.509 Number of I/O Submission Queues: 64 00:09:43.509 Number of I/O Completion Queues: 64 00:09:43.509 00:09:43.509 ZNS Specific Controller Data 00:09:43.509 ============================ 00:09:43.509 Zone Append Size Limit: 0 00:09:43.509 00:09:43.509 00:09:43.509 Active Namespaces 00:09:43.509 ================= 00:09:43.509 Namespace ID:1 00:09:43.509 Error Recovery Timeout: Unlimited 00:09:43.509 Command Set Identifier: NVM (00h) 00:09:43.509 Deallocate: Supported 00:09:43.509 Deallocated/Unwritten Error: Supported 00:09:43.509 Deallocated Read Value: All 0x00 00:09:43.509 Deallocate in Write Zeroes: Not Supported 00:09:43.509 Deallocated Guard Field: 0xFFFF 00:09:43.509 Flush: Supported 00:09:43.509 Reservation: Not Supported 00:09:43.509 Metadata Transferred as: Separate Metadata Buffer 00:09:43.509 Namespace Sharing Capabilities: Private 00:09:43.509 Size (in LBAs): 1548666 (5GiB) 00:09:43.509 Capacity (in LBAs): 1548666 (5GiB) 00:09:43.509 Utilization (in LBAs): 1548666 (5GiB) 
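Size, Capacity, and Utilization are reported in LBAs, so the GiB figure in parentheses is the LBA count times the data size of the current LBA format. For the 12341 namespace earlier (1310720 LBAs, LBA Format #04 with a 4096-byte data size) the arithmetic comes out exact; where it does not, as with the 1548666-LBA namespace above, the tool appears to truncate to whole GiB:

    lbas=1310720 block=4096
    echo "$(( lbas * block / 1024 / 1024 / 1024 )) GiB"    # prints: 5 GiB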
00:09:43.509 Thin Provisioning: Not Supported 00:09:43.509 Per-NS Atomic Units: No 00:09:43.509 Maximum Single Source Range Length: 128 00:09:43.509 Maximum Copy Length: 128 00:09:43.509 Maximum Source Range Count: 128 00:09:43.509 NGUID/EUI64 Never Reused: No 00:09:43.509 Namespace Write Protected: No 00:09:43.509 Number of LBA Formats: 8 00:09:43.509 Current LBA Format: LBA Format #07 00:09:43.509 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:43.509 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:43.509 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:43.509 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:43.509 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:43.509 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:43.509 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:43.509 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:43.509 00:09:43.509 NVM Specific Namespace Data 00:09:43.509 =========================== 00:09:43.509 Logical Block Storage Tag Mask: 0 00:09:43.509 Protection Information Capabilities: 00:09:43.509 16b Guard Protection Information Storage Tag Support: No 00:09:43.509 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:43.509 Storage Tag Check Read Support: No 00:09:43.509 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.509 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.509 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.509 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.509 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.509 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.509 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.509 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.509 04:58:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:43.509 04:58:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:43.768 ===================================================== 00:09:43.768 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:43.768 ===================================================== 00:09:43.768 Controller Capabilities/Features 00:09:43.768 ================================ 00:09:43.768 Vendor ID: 1b36 00:09:43.768 Subsystem Vendor ID: 1af4 00:09:43.768 Serial Number: 12341 00:09:43.768 Model Number: QEMU NVMe Ctrl 00:09:43.768 Firmware Version: 8.0.0 00:09:43.768 Recommended Arb Burst: 6 00:09:43.768 IEEE OUI Identifier: 00 54 52 00:09:43.768 Multi-path I/O 00:09:43.768 May have multiple subsystem ports: No 00:09:43.768 May have multiple controllers: No 00:09:43.768 Associated with SR-IOV VF: No 00:09:43.768 Max Data Transfer Size: 524288 00:09:43.768 Max Number of Namespaces: 256 00:09:43.768 Max Number of I/O Queues: 64 00:09:43.768 NVMe Specification Version (VS): 1.4 00:09:43.768 NVMe Specification Version (Identify): 1.4 00:09:43.768 Maximum Queue Entries: 2048 00:09:43.768 Contiguous Queues Required: Yes 00:09:43.768 Arbitration Mechanisms Supported 00:09:43.768 Weighted Round Robin: Not Supported 00:09:43.768 Vendor Specific: Not Supported 
00:09:43.768 Reset Timeout: 7500 ms 00:09:43.768 Doorbell Stride: 4 bytes 00:09:43.768 NVM Subsystem Reset: Not Supported 00:09:43.768 Command Sets Supported 00:09:43.768 NVM Command Set: Supported 00:09:43.768 Boot Partition: Not Supported 00:09:43.768 Memory Page Size Minimum: 4096 bytes 00:09:43.768 Memory Page Size Maximum: 65536 bytes 00:09:43.768 Persistent Memory Region: Not Supported 00:09:43.768 Optional Asynchronous Events Supported 00:09:43.768 Namespace Attribute Notices: Supported 00:09:43.768 Firmware Activation Notices: Not Supported 00:09:43.768 ANA Change Notices: Not Supported 00:09:43.768 PLE Aggregate Log Change Notices: Not Supported 00:09:43.768 LBA Status Info Alert Notices: Not Supported 00:09:43.768 EGE Aggregate Log Change Notices: Not Supported 00:09:43.768 Normal NVM Subsystem Shutdown event: Not Supported 00:09:43.768 Zone Descriptor Change Notices: Not Supported 00:09:43.768 Discovery Log Change Notices: Not Supported 00:09:43.768 Controller Attributes 00:09:43.768 128-bit Host Identifier: Not Supported 00:09:43.768 Non-Operational Permissive Mode: Not Supported 00:09:43.768 NVM Sets: Not Supported 00:09:43.768 Read Recovery Levels: Not Supported 00:09:43.769 Endurance Groups: Not Supported 00:09:43.769 Predictable Latency Mode: Not Supported 00:09:43.769 Traffic Based Keep Alive: Not Supported 00:09:43.769 Namespace Granularity: Not Supported 00:09:43.769 SQ Associations: Not Supported 00:09:43.769 UUID List: Not Supported 00:09:43.769 Multi-Domain Subsystem: Not Supported 00:09:43.769 Fixed Capacity Management: Not Supported 00:09:43.769 Variable Capacity Management: Not Supported 00:09:43.769 Delete Endurance Group: Not Supported 00:09:43.769 Delete NVM Set: Not Supported 00:09:43.769 Extended LBA Formats Supported: Supported 00:09:43.769 Flexible Data Placement Supported: Not Supported 00:09:43.769 00:09:43.769 Controller Memory Buffer Support 00:09:43.769 ================================ 00:09:43.769 Supported: No 00:09:43.769 00:09:43.769 Persistent Memory Region Support 00:09:43.769 ================================ 00:09:43.769 Supported: No 00:09:43.769 00:09:43.769 Admin Command Set Attributes 00:09:43.769 ============================ 00:09:43.769 Security Send/Receive: Not Supported 00:09:43.769 Format NVM: Supported 00:09:43.769 Firmware Activate/Download: Not Supported 00:09:43.769 Namespace Management: Supported 00:09:43.769 Device Self-Test: Not Supported 00:09:43.769 Directives: Supported 00:09:43.769 NVMe-MI: Not Supported 00:09:43.769 Virtualization Management: Not Supported 00:09:43.769 Doorbell Buffer Config: Supported 00:09:43.769 Get LBA Status Capability: Not Supported 00:09:43.769 Command & Feature Lockdown Capability: Not Supported 00:09:43.769 Abort Command Limit: 4 00:09:43.769 Async Event Request Limit: 4 00:09:43.769 Number of Firmware Slots: N/A 00:09:43.769 Firmware Slot 1 Read-Only: N/A 00:09:43.769 Firmware Activation Without Reset: N/A 00:09:43.769 Multiple Update Detection Support: N/A 00:09:43.769 Firmware Update Granularity: No Information Provided 00:09:43.769 Per-Namespace SMART Log: Yes 00:09:43.769 Asymmetric Namespace Access Log Page: Not Supported 00:09:43.769 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:43.769 Command Effects Log Page: Supported 00:09:43.769 Get Log Page Extended Data: Supported 00:09:43.769 Telemetry Log Pages: Not Supported 00:09:43.769 Persistent Event Log Pages: Not Supported 00:09:43.769 Supported Log Pages Log Page: May Support 00:09:43.769 Commands Supported & Effects Log Page: Not Supported 
00:09:43.769 Feature Identifiers & Effects Log Page: May Support 00:09:43.769 NVMe-MI Commands & Effects Log Page: May Support 00:09:43.769 Data Area 4 for Telemetry Log: Not Supported 00:09:43.769 Error Log Page Entries Supported: 1 00:09:43.769 Keep Alive: Not Supported 00:09:43.769 00:09:43.769 NVM Command Set Attributes 00:09:43.769 ========================== 00:09:43.769 Submission Queue Entry Size 00:09:43.769 Max: 64 00:09:43.769 Min: 64 00:09:43.769 Completion Queue Entry Size 00:09:43.769 Max: 16 00:09:43.769 Min: 16 00:09:43.769 Number of Namespaces: 256 00:09:43.769 Compare Command: Supported 00:09:43.769 Write Uncorrectable Command: Not Supported 00:09:43.769 Dataset Management Command: Supported 00:09:43.769 Write Zeroes Command: Supported 00:09:43.769 Set Features Save Field: Supported 00:09:43.769 Reservations: Not Supported 00:09:43.769 Timestamp: Supported 00:09:43.769 Copy: Supported 00:09:43.769 Volatile Write Cache: Present 00:09:43.769 Atomic Write Unit (Normal): 1 00:09:43.769 Atomic Write Unit (PFail): 1 00:09:43.769 Atomic Compare & Write Unit: 1 00:09:43.769 Fused Compare & Write: Not Supported 00:09:43.769 Scatter-Gather List 00:09:43.769 SGL Command Set: Supported 00:09:43.769 SGL Keyed: Not Supported 00:09:43.769 SGL Bit Bucket Descriptor: Not Supported 00:09:43.769 SGL Metadata Pointer: Not Supported 00:09:43.769 Oversized SGL: Not Supported 00:09:43.769 SGL Metadata Address: Not Supported 00:09:43.769 SGL Offset: Not Supported 00:09:43.769 Transport SGL Data Block: Not Supported 00:09:43.769 Replay Protected Memory Block: Not Supported 00:09:43.769 00:09:43.769 Firmware Slot Information 00:09:43.769 ========================= 00:09:43.769 Active slot: 1 00:09:43.769 Slot 1 Firmware Revision: 1.0 00:09:43.769 00:09:43.769 00:09:43.769 Commands Supported and Effects 00:09:43.769 ============================== 00:09:43.769 Admin Commands 00:09:43.769 -------------- 00:09:43.769 Delete I/O Submission Queue (00h): Supported 00:09:43.769 Create I/O Submission Queue (01h): Supported 00:09:43.769 Get Log Page (02h): Supported 00:09:43.769 Delete I/O Completion Queue (04h): Supported 00:09:43.769 Create I/O Completion Queue (05h): Supported 00:09:43.769 Identify (06h): Supported 00:09:43.769 Abort (08h): Supported 00:09:43.769 Set Features (09h): Supported 00:09:43.769 Get Features (0Ah): Supported 00:09:43.769 Asynchronous Event Request (0Ch): Supported 00:09:43.769 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:43.769 Directive Send (19h): Supported 00:09:43.769 Directive Receive (1Ah): Supported 00:09:43.769 Virtualization Management (1Ch): Supported 00:09:43.769 Doorbell Buffer Config (7Ch): Supported 00:09:43.769 Format NVM (80h): Supported LBA-Change 00:09:43.769 I/O Commands 00:09:43.769 ------------ 00:09:43.769 Flush (00h): Supported LBA-Change 00:09:43.769 Write (01h): Supported LBA-Change 00:09:43.769 Read (02h): Supported 00:09:43.769 Compare (05h): Supported 00:09:43.769 Write Zeroes (08h): Supported LBA-Change 00:09:43.769 Dataset Management (09h): Supported LBA-Change 00:09:43.769 Unknown (0Ch): Supported 00:09:43.769 Unknown (12h): Supported 00:09:43.769 Copy (19h): Supported LBA-Change 00:09:43.769 Unknown (1Dh): Supported LBA-Change 00:09:43.769 00:09:43.769 Error Log 00:09:43.769 ========= 00:09:43.769 00:09:43.769 Arbitration 00:09:43.769 =========== 00:09:43.769 Arbitration Burst: no limit 00:09:43.769 00:09:43.769 Power Management 00:09:43.769 ================ 00:09:43.769 Number of Power States: 1 00:09:43.769 Current Power State: 
Power State #0 00:09:43.769 Power State #0: 00:09:43.769 Max Power: 25.00 W 00:09:43.769 Non-Operational State: Operational 00:09:43.769 Entry Latency: 16 microseconds 00:09:43.769 Exit Latency: 4 microseconds 00:09:43.769 Relative Read Throughput: 0 00:09:43.769 Relative Read Latency: 0 00:09:43.769 Relative Write Throughput: 0 00:09:43.769 Relative Write Latency: 0 00:09:43.769 Idle Power: Not Reported 00:09:43.769 Active Power: Not Reported 00:09:43.769 Non-Operational Permissive Mode: Not Supported 00:09:43.769 00:09:43.769 Health Information 00:09:43.769 ================== 00:09:43.769 Critical Warnings: 00:09:43.769 Available Spare Space: OK 00:09:43.769 Temperature: OK 00:09:43.769 Device Reliability: OK 00:09:43.769 Read Only: No 00:09:43.769 Volatile Memory Backup: OK 00:09:43.769 Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.769 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:43.769 Available Spare: 0% 00:09:43.769 Available Spare Threshold: 0% 00:09:43.769 Life Percentage Used: 0% 00:09:43.769 Data Units Read: 1112 00:09:43.769 Data Units Written: 890 00:09:43.769 Host Read Commands: 50967 00:09:43.769 Host Write Commands: 47937 00:09:43.769 Controller Busy Time: 0 minutes 00:09:43.769 Power Cycles: 0 00:09:43.769 Power On Hours: 0 hours 00:09:43.769 Unsafe Shutdowns: 0 00:09:43.770 Unrecoverable Media Errors: 0 00:09:43.770 Lifetime Error Log Entries: 0 00:09:43.770 Warning Temperature Time: 0 minutes 00:09:43.770 Critical Temperature Time: 0 minutes 00:09:43.770 00:09:43.770 Number of Queues 00:09:43.770 ================ 00:09:43.770 Number of I/O Submission Queues: 64 00:09:43.770 Number of I/O Completion Queues: 64 00:09:43.770 00:09:43.770 ZNS Specific Controller Data 00:09:43.770 ============================ 00:09:43.770 Zone Append Size Limit: 0 00:09:43.770 00:09:43.770 00:09:43.770 Active Namespaces 00:09:43.770 ================= 00:09:43.770 Namespace ID:1 00:09:43.770 Error Recovery Timeout: Unlimited 00:09:43.770 Command Set Identifier: NVM (00h) 00:09:43.770 Deallocate: Supported 00:09:43.770 Deallocated/Unwritten Error: Supported 00:09:43.770 Deallocated Read Value: All 0x00 00:09:43.770 Deallocate in Write Zeroes: Not Supported 00:09:43.770 Deallocated Guard Field: 0xFFFF 00:09:43.770 Flush: Supported 00:09:43.770 Reservation: Not Supported 00:09:43.770 Namespace Sharing Capabilities: Private 00:09:43.770 Size (in LBAs): 1310720 (5GiB) 00:09:43.770 Capacity (in LBAs): 1310720 (5GiB) 00:09:43.770 Utilization (in LBAs): 1310720 (5GiB) 00:09:43.770 Thin Provisioning: Not Supported 00:09:43.770 Per-NS Atomic Units: No 00:09:43.770 Maximum Single Source Range Length: 128 00:09:43.770 Maximum Copy Length: 128 00:09:43.770 Maximum Source Range Count: 128 00:09:43.770 NGUID/EUI64 Never Reused: No 00:09:43.770 Namespace Write Protected: No 00:09:43.770 Number of LBA Formats: 8 00:09:43.770 Current LBA Format: LBA Format #04 00:09:43.770 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:43.770 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:43.770 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:43.770 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:43.770 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:43.770 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:43.770 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:43.770 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:43.770 00:09:43.770 NVM Specific Namespace Data 00:09:43.770 =========================== 00:09:43.770 Logical Block Storage Tag Mask: 0 
00:09:43.770 Protection Information Capabilities: 00:09:43.770 16b Guard Protection Information Storage Tag Support: No 00:09:43.770 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:43.770 Storage Tag Check Read Support: No 00:09:43.770 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.770 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.770 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.770 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.770 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.770 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.770 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.770 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:43.770 04:58:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:43.770 04:58:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:44.030 ===================================================== 00:09:44.030 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:44.030 ===================================================== 00:09:44.030 Controller Capabilities/Features 00:09:44.030 ================================ 00:09:44.030 Vendor ID: 1b36 00:09:44.030 Subsystem Vendor ID: 1af4 00:09:44.030 Serial Number: 12342 00:09:44.030 Model Number: QEMU NVMe Ctrl 00:09:44.030 Firmware Version: 8.0.0 00:09:44.030 Recommended Arb Burst: 6 00:09:44.030 IEEE OUI Identifier: 00 54 52 00:09:44.030 Multi-path I/O 00:09:44.030 May have multiple subsystem ports: No 00:09:44.030 May have multiple controllers: No 00:09:44.030 Associated with SR-IOV VF: No 00:09:44.030 Max Data Transfer Size: 524288 00:09:44.030 Max Number of Namespaces: 256 00:09:44.030 Max Number of I/O Queues: 64 00:09:44.030 NVMe Specification Version (VS): 1.4 00:09:44.030 NVMe Specification Version (Identify): 1.4 00:09:44.030 Maximum Queue Entries: 2048 00:09:44.030 Contiguous Queues Required: Yes 00:09:44.030 Arbitration Mechanisms Supported 00:09:44.030 Weighted Round Robin: Not Supported 00:09:44.030 Vendor Specific: Not Supported 00:09:44.030 Reset Timeout: 7500 ms 00:09:44.030 Doorbell Stride: 4 bytes 00:09:44.030 NVM Subsystem Reset: Not Supported 00:09:44.030 Command Sets Supported 00:09:44.030 NVM Command Set: Supported 00:09:44.030 Boot Partition: Not Supported 00:09:44.030 Memory Page Size Minimum: 4096 bytes 00:09:44.030 Memory Page Size Maximum: 65536 bytes 00:09:44.030 Persistent Memory Region: Not Supported 00:09:44.030 Optional Asynchronous Events Supported 00:09:44.030 Namespace Attribute Notices: Supported 00:09:44.030 Firmware Activation Notices: Not Supported 00:09:44.030 ANA Change Notices: Not Supported 00:09:44.030 PLE Aggregate Log Change Notices: Not Supported 00:09:44.030 LBA Status Info Alert Notices: Not Supported 00:09:44.030 EGE Aggregate Log Change Notices: Not Supported 00:09:44.030 Normal NVM Subsystem Shutdown event: Not Supported 00:09:44.030 Zone Descriptor Change Notices: Not Supported 00:09:44.030 Discovery Log Change Notices: Not Supported 00:09:44.030 Controller Attributes 00:09:44.030 128-bit Host Identifier: 
Not Supported 00:09:44.030 Non-Operational Permissive Mode: Not Supported 00:09:44.030 NVM Sets: Not Supported 00:09:44.030 Read Recovery Levels: Not Supported 00:09:44.030 Endurance Groups: Not Supported 00:09:44.030 Predictable Latency Mode: Not Supported 00:09:44.030 Traffic Based Keep Alive: Not Supported 00:09:44.030 Namespace Granularity: Not Supported 00:09:44.030 SQ Associations: Not Supported 00:09:44.030 UUID List: Not Supported 00:09:44.030 Multi-Domain Subsystem: Not Supported 00:09:44.030 Fixed Capacity Management: Not Supported 00:09:44.030 Variable Capacity Management: Not Supported 00:09:44.030 Delete Endurance Group: Not Supported 00:09:44.030 Delete NVM Set: Not Supported 00:09:44.030 Extended LBA Formats Supported: Supported 00:09:44.030 Flexible Data Placement Supported: Not Supported 00:09:44.030 00:09:44.030 Controller Memory Buffer Support 00:09:44.030 ================================ 00:09:44.030 Supported: No 00:09:44.030 00:09:44.030 Persistent Memory Region Support 00:09:44.030 ================================ 00:09:44.030 Supported: No 00:09:44.030 00:09:44.030 Admin Command Set Attributes 00:09:44.030 ============================ 00:09:44.030 Security Send/Receive: Not Supported 00:09:44.030 Format NVM: Supported 00:09:44.030 Firmware Activate/Download: Not Supported 00:09:44.030 Namespace Management: Supported 00:09:44.030 Device Self-Test: Not Supported 00:09:44.030 Directives: Supported 00:09:44.030 NVMe-MI: Not Supported 00:09:44.030 Virtualization Management: Not Supported 00:09:44.030 Doorbell Buffer Config: Supported 00:09:44.030 Get LBA Status Capability: Not Supported 00:09:44.030 Command & Feature Lockdown Capability: Not Supported 00:09:44.030 Abort Command Limit: 4 00:09:44.030 Async Event Request Limit: 4 00:09:44.030 Number of Firmware Slots: N/A 00:09:44.030 Firmware Slot 1 Read-Only: N/A 00:09:44.030 Firmware Activation Without Reset: N/A 00:09:44.030 Multiple Update Detection Support: N/A 00:09:44.030 Firmware Update Granularity: No Information Provided 00:09:44.030 Per-Namespace SMART Log: Yes 00:09:44.030 Asymmetric Namespace Access Log Page: Not Supported 00:09:44.030 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:44.030 Command Effects Log Page: Supported 00:09:44.030 Get Log Page Extended Data: Supported 00:09:44.030 Telemetry Log Pages: Not Supported 00:09:44.030 Persistent Event Log Pages: Not Supported 00:09:44.030 Supported Log Pages Log Page: May Support 00:09:44.030 Commands Supported & Effects Log Page: Not Supported 00:09:44.030 Feature Identifiers & Effects Log Page: May Support 00:09:44.030 NVMe-MI Commands & Effects Log Page: May Support 00:09:44.030 Data Area 4 for Telemetry Log: Not Supported 00:09:44.030 Error Log Page Entries Supported: 1 00:09:44.030 Keep Alive: Not Supported 00:09:44.030 00:09:44.030 NVM Command Set Attributes 00:09:44.030 ========================== 00:09:44.030 Submission Queue Entry Size 00:09:44.030 Max: 64 00:09:44.030 Min: 64 00:09:44.030 Completion Queue Entry Size 00:09:44.030 Max: 16 00:09:44.030 Min: 16 00:09:44.030 Number of Namespaces: 256 00:09:44.030 Compare Command: Supported 00:09:44.030 Write Uncorrectable Command: Not Supported 00:09:44.030 Dataset Management Command: Supported 00:09:44.030 Write Zeroes Command: Supported 00:09:44.030 Set Features Save Field: Supported 00:09:44.030 Reservations: Not Supported 00:09:44.030 Timestamp: Supported 00:09:44.030 Copy: Supported 00:09:44.030 Volatile Write Cache: Present 00:09:44.030 Atomic Write Unit (Normal): 1 00:09:44.030 Atomic Write Unit 
(PFail): 1 00:09:44.030 Atomic Compare & Write Unit: 1 00:09:44.030 Fused Compare & Write: Not Supported 00:09:44.030 Scatter-Gather List 00:09:44.030 SGL Command Set: Supported 00:09:44.030 SGL Keyed: Not Supported 00:09:44.030 SGL Bit Bucket Descriptor: Not Supported 00:09:44.030 SGL Metadata Pointer: Not Supported 00:09:44.030 Oversized SGL: Not Supported 00:09:44.030 SGL Metadata Address: Not Supported 00:09:44.030 SGL Offset: Not Supported 00:09:44.030 Transport SGL Data Block: Not Supported 00:09:44.030 Replay Protected Memory Block: Not Supported 00:09:44.030 00:09:44.030 Firmware Slot Information 00:09:44.030 ========================= 00:09:44.030 Active slot: 1 00:09:44.030 Slot 1 Firmware Revision: 1.0 00:09:44.030 00:09:44.030 00:09:44.030 Commands Supported and Effects 00:09:44.030 ============================== 00:09:44.030 Admin Commands 00:09:44.030 -------------- 00:09:44.030 Delete I/O Submission Queue (00h): Supported 00:09:44.030 Create I/O Submission Queue (01h): Supported 00:09:44.030 Get Log Page (02h): Supported 00:09:44.030 Delete I/O Completion Queue (04h): Supported 00:09:44.030 Create I/O Completion Queue (05h): Supported 00:09:44.030 Identify (06h): Supported 00:09:44.030 Abort (08h): Supported 00:09:44.030 Set Features (09h): Supported 00:09:44.030 Get Features (0Ah): Supported 00:09:44.030 Asynchronous Event Request (0Ch): Supported 00:09:44.030 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:44.030 Directive Send (19h): Supported 00:09:44.030 Directive Receive (1Ah): Supported 00:09:44.030 Virtualization Management (1Ch): Supported 00:09:44.030 Doorbell Buffer Config (7Ch): Supported 00:09:44.030 Format NVM (80h): Supported LBA-Change 00:09:44.030 I/O Commands 00:09:44.030 ------------ 00:09:44.030 Flush (00h): Supported LBA-Change 00:09:44.031 Write (01h): Supported LBA-Change 00:09:44.031 Read (02h): Supported 00:09:44.031 Compare (05h): Supported 00:09:44.031 Write Zeroes (08h): Supported LBA-Change 00:09:44.031 Dataset Management (09h): Supported LBA-Change 00:09:44.031 Unknown (0Ch): Supported 00:09:44.031 Unknown (12h): Supported 00:09:44.031 Copy (19h): Supported LBA-Change 00:09:44.031 Unknown (1Dh): Supported LBA-Change 00:09:44.031 00:09:44.031 Error Log 00:09:44.031 ========= 00:09:44.031 00:09:44.031 Arbitration 00:09:44.031 =========== 00:09:44.031 Arbitration Burst: no limit 00:09:44.031 00:09:44.031 Power Management 00:09:44.031 ================ 00:09:44.031 Number of Power States: 1 00:09:44.031 Current Power State: Power State #0 00:09:44.031 Power State #0: 00:09:44.031 Max Power: 25.00 W 00:09:44.031 Non-Operational State: Operational 00:09:44.031 Entry Latency: 16 microseconds 00:09:44.031 Exit Latency: 4 microseconds 00:09:44.031 Relative Read Throughput: 0 00:09:44.031 Relative Read Latency: 0 00:09:44.031 Relative Write Throughput: 0 00:09:44.031 Relative Write Latency: 0 00:09:44.031 Idle Power: Not Reported 00:09:44.031 Active Power: Not Reported 00:09:44.031 Non-Operational Permissive Mode: Not Supported 00:09:44.031 00:09:44.031 Health Information 00:09:44.031 ================== 00:09:44.031 Critical Warnings: 00:09:44.031 Available Spare Space: OK 00:09:44.031 Temperature: OK 00:09:44.031 Device Reliability: OK 00:09:44.031 Read Only: No 00:09:44.031 Volatile Memory Backup: OK 00:09:44.031 Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.031 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:44.031 Available Spare: 0% 00:09:44.031 Available Spare Threshold: 0% 00:09:44.031 Life Percentage Used: 0% 
00:09:44.031 Data Units Read: 2198 00:09:44.031 Data Units Written: 1878 00:09:44.031 Host Read Commands: 102595 00:09:44.031 Host Write Commands: 98365 00:09:44.031 Controller Busy Time: 0 minutes 00:09:44.031 Power Cycles: 0 00:09:44.031 Power On Hours: 0 hours 00:09:44.031 Unsafe Shutdowns: 0 00:09:44.031 Unrecoverable Media Errors: 0 00:09:44.031 Lifetime Error Log Entries: 0 00:09:44.031 Warning Temperature Time: 0 minutes 00:09:44.031 Critical Temperature Time: 0 minutes 00:09:44.031 00:09:44.031 Number of Queues 00:09:44.031 ================ 00:09:44.031 Number of I/O Submission Queues: 64 00:09:44.031 Number of I/O Completion Queues: 64 00:09:44.031 00:09:44.031 ZNS Specific Controller Data 00:09:44.031 ============================ 00:09:44.031 Zone Append Size Limit: 0 00:09:44.031 00:09:44.031 00:09:44.031 Active Namespaces 00:09:44.031 ================= 00:09:44.031 Namespace ID:1 00:09:44.031 Error Recovery Timeout: Unlimited 00:09:44.031 Command Set Identifier: NVM (00h) 00:09:44.031 Deallocate: Supported 00:09:44.031 Deallocated/Unwritten Error: Supported 00:09:44.031 Deallocated Read Value: All 0x00 00:09:44.031 Deallocate in Write Zeroes: Not Supported 00:09:44.031 Deallocated Guard Field: 0xFFFF 00:09:44.031 Flush: Supported 00:09:44.031 Reservation: Not Supported 00:09:44.031 Namespace Sharing Capabilities: Private 00:09:44.031 Size (in LBAs): 1048576 (4GiB) 00:09:44.031 Capacity (in LBAs): 1048576 (4GiB) 00:09:44.031 Utilization (in LBAs): 1048576 (4GiB) 00:09:44.031 Thin Provisioning: Not Supported 00:09:44.031 Per-NS Atomic Units: No 00:09:44.031 Maximum Single Source Range Length: 128 00:09:44.031 Maximum Copy Length: 128 00:09:44.031 Maximum Source Range Count: 128 00:09:44.031 NGUID/EUI64 Never Reused: No 00:09:44.031 Namespace Write Protected: No 00:09:44.031 Number of LBA Formats: 8 00:09:44.031 Current LBA Format: LBA Format #04 00:09:44.031 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:44.031 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:44.031 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:44.031 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:44.031 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:44.031 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:44.031 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:44.031 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:44.031 00:09:44.031 NVM Specific Namespace Data 00:09:44.031 =========================== 00:09:44.031 Logical Block Storage Tag Mask: 0 00:09:44.031 Protection Information Capabilities: 00:09:44.031 16b Guard Protection Information Storage Tag Support: No 00:09:44.031 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:44.031 Storage Tag Check Read Support: No 00:09:44.031 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Namespace ID:2 00:09:44.031 Error Recovery Timeout: Unlimited 00:09:44.031 Command Set Identifier: NVM (00h) 00:09:44.031 Deallocate: Supported 00:09:44.031 Deallocated/Unwritten Error: Supported 00:09:44.031 Deallocated Read Value: All 0x00 00:09:44.031 Deallocate in Write Zeroes: Not Supported 00:09:44.031 Deallocated Guard Field: 0xFFFF 00:09:44.031 Flush: Supported 00:09:44.031 Reservation: Not Supported 00:09:44.031 Namespace Sharing Capabilities: Private 00:09:44.031 Size (in LBAs): 1048576 (4GiB) 00:09:44.031 Capacity (in LBAs): 1048576 (4GiB) 00:09:44.031 Utilization (in LBAs): 1048576 (4GiB) 00:09:44.031 Thin Provisioning: Not Supported 00:09:44.031 Per-NS Atomic Units: No 00:09:44.031 Maximum Single Source Range Length: 128 00:09:44.031 Maximum Copy Length: 128 00:09:44.031 Maximum Source Range Count: 128 00:09:44.031 NGUID/EUI64 Never Reused: No 00:09:44.031 Namespace Write Protected: No 00:09:44.031 Number of LBA Formats: 8 00:09:44.031 Current LBA Format: LBA Format #04 00:09:44.031 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:44.031 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:44.031 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:44.031 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:44.031 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:44.031 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:44.031 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:44.031 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:44.031 00:09:44.031 NVM Specific Namespace Data 00:09:44.031 =========================== 00:09:44.031 Logical Block Storage Tag Mask: 0 00:09:44.031 Protection Information Capabilities: 00:09:44.031 16b Guard Protection Information Storage Tag Support: No 00:09:44.031 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:44.031 Storage Tag Check Read Support: No 00:09:44.031 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.031 Namespace ID:3 00:09:44.031 Error Recovery Timeout: Unlimited 00:09:44.031 Command Set Identifier: NVM (00h) 00:09:44.031 Deallocate: Supported 00:09:44.031 Deallocated/Unwritten Error: Supported 00:09:44.031 Deallocated Read Value: All 0x00 00:09:44.031 Deallocate in Write Zeroes: Not Supported 00:09:44.031 Deallocated Guard Field: 0xFFFF 00:09:44.031 Flush: Supported 00:09:44.031 Reservation: Not Supported 00:09:44.031 Namespace Sharing Capabilities: Private 00:09:44.031 Size (in LBAs): 1048576 (4GiB) 00:09:44.031 Capacity (in LBAs): 1048576 (4GiB) 00:09:44.031 Utilization (in LBAs): 1048576 (4GiB) 00:09:44.031 Thin Provisioning: Not Supported 00:09:44.031 Per-NS Atomic Units: No 00:09:44.031 Maximum Single Source Range 
Length: 128 00:09:44.031 Maximum Copy Length: 128 00:09:44.031 Maximum Source Range Count: 128 00:09:44.031 NGUID/EUI64 Never Reused: No 00:09:44.031 Namespace Write Protected: No 00:09:44.031 Number of LBA Formats: 8 00:09:44.031 Current LBA Format: LBA Format #04 00:09:44.031 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:44.031 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:44.031 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:44.031 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:44.031 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:44.031 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:44.031 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:44.031 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:44.031 00:09:44.031 NVM Specific Namespace Data 00:09:44.031 =========================== 00:09:44.031 Logical Block Storage Tag Mask: 0 00:09:44.031 Protection Information Capabilities: 00:09:44.031 16b Guard Protection Information Storage Tag Support: No 00:09:44.031 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:44.291 Storage Tag Check Read Support: No 00:09:44.291 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.291 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.291 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.291 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.291 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.291 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.291 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.291 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.291 04:58:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:44.291 04:58:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:44.550 ===================================================== 00:09:44.550 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:44.550 ===================================================== 00:09:44.550 Controller Capabilities/Features 00:09:44.550 ================================ 00:09:44.550 Vendor ID: 1b36 00:09:44.550 Subsystem Vendor ID: 1af4 00:09:44.550 Serial Number: 12343 00:09:44.550 Model Number: QEMU NVMe Ctrl 00:09:44.550 Firmware Version: 8.0.0 00:09:44.550 Recommended Arb Burst: 6 00:09:44.550 IEEE OUI Identifier: 00 54 52 00:09:44.550 Multi-path I/O 00:09:44.550 May have multiple subsystem ports: No 00:09:44.550 May have multiple controllers: Yes 00:09:44.550 Associated with SR-IOV VF: No 00:09:44.550 Max Data Transfer Size: 524288 00:09:44.550 Max Number of Namespaces: 256 00:09:44.550 Max Number of I/O Queues: 64 00:09:44.550 NVMe Specification Version (VS): 1.4 00:09:44.550 NVMe Specification Version (Identify): 1.4 00:09:44.550 Maximum Queue Entries: 2048 00:09:44.550 Contiguous Queues Required: Yes 00:09:44.550 Arbitration Mechanisms Supported 00:09:44.550 Weighted Round Robin: Not Supported 00:09:44.550 Vendor Specific: Not Supported 00:09:44.550 Reset Timeout: 7500 ms 00:09:44.550 Doorbell Stride: 4 bytes 00:09:44.550 NVM Subsystem Reset: Not Supported 
00:09:44.550 Command Sets Supported 00:09:44.550 NVM Command Set: Supported 00:09:44.550 Boot Partition: Not Supported 00:09:44.550 Memory Page Size Minimum: 4096 bytes 00:09:44.550 Memory Page Size Maximum: 65536 bytes 00:09:44.550 Persistent Memory Region: Not Supported 00:09:44.550 Optional Asynchronous Events Supported 00:09:44.550 Namespace Attribute Notices: Supported 00:09:44.550 Firmware Activation Notices: Not Supported 00:09:44.550 ANA Change Notices: Not Supported 00:09:44.550 PLE Aggregate Log Change Notices: Not Supported 00:09:44.550 LBA Status Info Alert Notices: Not Supported 00:09:44.550 EGE Aggregate Log Change Notices: Not Supported 00:09:44.550 Normal NVM Subsystem Shutdown event: Not Supported 00:09:44.550 Zone Descriptor Change Notices: Not Supported 00:09:44.550 Discovery Log Change Notices: Not Supported 00:09:44.550 Controller Attributes 00:09:44.550 128-bit Host Identifier: Not Supported 00:09:44.550 Non-Operational Permissive Mode: Not Supported 00:09:44.550 NVM Sets: Not Supported 00:09:44.550 Read Recovery Levels: Not Supported 00:09:44.550 Endurance Groups: Supported 00:09:44.550 Predictable Latency Mode: Not Supported 00:09:44.550 Traffic Based Keep Alive: Not Supported 00:09:44.550 Namespace Granularity: Not Supported 00:09:44.550 SQ Associations: Not Supported 00:09:44.550 UUID List: Not Supported 00:09:44.550 Multi-Domain Subsystem: Not Supported 00:09:44.550 Fixed Capacity Management: Not Supported 00:09:44.550 Variable Capacity Management: Not Supported 00:09:44.550 Delete Endurance Group: Not Supported 00:09:44.550 Delete NVM Set: Not Supported 00:09:44.550 Extended LBA Formats Supported: Supported 00:09:44.550 Flexible Data Placement Supported: Supported 00:09:44.550 00:09:44.550 Controller Memory Buffer Support 00:09:44.550 ================================ 00:09:44.550 Supported: No 00:09:44.550 00:09:44.550 Persistent Memory Region Support 00:09:44.550 ================================ 00:09:44.550 Supported: No 00:09:44.550 00:09:44.550 Admin Command Set Attributes 00:09:44.550 ============================ 00:09:44.550 Security Send/Receive: Not Supported 00:09:44.550 Format NVM: Supported 00:09:44.550 Firmware Activate/Download: Not Supported 00:09:44.550 Namespace Management: Supported 00:09:44.550 Device Self-Test: Not Supported 00:09:44.550 Directives: Supported 00:09:44.550 NVMe-MI: Not Supported 00:09:44.550 Virtualization Management: Not Supported 00:09:44.550 Doorbell Buffer Config: Supported 00:09:44.550 Get LBA Status Capability: Not Supported 00:09:44.550 Command & Feature Lockdown Capability: Not Supported 00:09:44.550 Abort Command Limit: 4 00:09:44.550 Async Event Request Limit: 4 00:09:44.550 Number of Firmware Slots: N/A 00:09:44.550 Firmware Slot 1 Read-Only: N/A 00:09:44.550 Firmware Activation Without Reset: N/A 00:09:44.550 Multiple Update Detection Support: N/A 00:09:44.550 Firmware Update Granularity: No Information Provided 00:09:44.550 Per-Namespace SMART Log: Yes 00:09:44.550 Asymmetric Namespace Access Log Page: Not Supported 00:09:44.550 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:44.550 Command Effects Log Page: Supported 00:09:44.550 Get Log Page Extended Data: Supported 00:09:44.550 Telemetry Log Pages: Not Supported 00:09:44.550 Persistent Event Log Pages: Not Supported 00:09:44.550 Supported Log Pages Log Page: May Support 00:09:44.550 Commands Supported & Effects Log Page: Not Supported 00:09:44.550 Feature Identifiers & Effects Log Page: May Support 00:09:44.550 NVMe-MI Commands & Effects Log Page: May 
Support 00:09:44.550 Data Area 4 for Telemetry Log: Not Supported 00:09:44.551 Error Log Page Entries Supported: 1 00:09:44.551 Keep Alive: Not Supported 00:09:44.551 00:09:44.551 NVM Command Set Attributes 00:09:44.551 ========================== 00:09:44.551 Submission Queue Entry Size 00:09:44.551 Max: 64 00:09:44.551 Min: 64 00:09:44.551 Completion Queue Entry Size 00:09:44.551 Max: 16 00:09:44.551 Min: 16 00:09:44.551 Number of Namespaces: 256 00:09:44.551 Compare Command: Supported 00:09:44.551 Write Uncorrectable Command: Not Supported 00:09:44.551 Dataset Management Command: Supported 00:09:44.551 Write Zeroes Command: Supported 00:09:44.551 Set Features Save Field: Supported 00:09:44.551 Reservations: Not Supported 00:09:44.551 Timestamp: Supported 00:09:44.551 Copy: Supported 00:09:44.551 Volatile Write Cache: Present 00:09:44.551 Atomic Write Unit (Normal): 1 00:09:44.551 Atomic Write Unit (PFail): 1 00:09:44.551 Atomic Compare & Write Unit: 1 00:09:44.551 Fused Compare & Write: Not Supported 00:09:44.551 Scatter-Gather List 00:09:44.551 SGL Command Set: Supported 00:09:44.551 SGL Keyed: Not Supported 00:09:44.551 SGL Bit Bucket Descriptor: Not Supported 00:09:44.551 SGL Metadata Pointer: Not Supported 00:09:44.551 Oversized SGL: Not Supported 00:09:44.551 SGL Metadata Address: Not Supported 00:09:44.551 SGL Offset: Not Supported 00:09:44.551 Transport SGL Data Block: Not Supported 00:09:44.551 Replay Protected Memory Block: Not Supported 00:09:44.551 00:09:44.551 Firmware Slot Information 00:09:44.551 ========================= 00:09:44.551 Active slot: 1 00:09:44.551 Slot 1 Firmware Revision: 1.0 00:09:44.551 00:09:44.551 00:09:44.551 Commands Supported and Effects 00:09:44.551 ============================== 00:09:44.551 Admin Commands 00:09:44.551 -------------- 00:09:44.551 Delete I/O Submission Queue (00h): Supported 00:09:44.551 Create I/O Submission Queue (01h): Supported 00:09:44.551 Get Log Page (02h): Supported 00:09:44.551 Delete I/O Completion Queue (04h): Supported 00:09:44.551 Create I/O Completion Queue (05h): Supported 00:09:44.551 Identify (06h): Supported 00:09:44.551 Abort (08h): Supported 00:09:44.551 Set Features (09h): Supported 00:09:44.551 Get Features (0Ah): Supported 00:09:44.551 Asynchronous Event Request (0Ch): Supported 00:09:44.551 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:44.551 Directive Send (19h): Supported 00:09:44.551 Directive Receive (1Ah): Supported 00:09:44.551 Virtualization Management (1Ch): Supported 00:09:44.551 Doorbell Buffer Config (7Ch): Supported 00:09:44.551 Format NVM (80h): Supported LBA-Change 00:09:44.551 I/O Commands 00:09:44.551 ------------ 00:09:44.551 Flush (00h): Supported LBA-Change 00:09:44.551 Write (01h): Supported LBA-Change 00:09:44.551 Read (02h): Supported 00:09:44.551 Compare (05h): Supported 00:09:44.551 Write Zeroes (08h): Supported LBA-Change 00:09:44.551 Dataset Management (09h): Supported LBA-Change 00:09:44.551 Unknown (0Ch): Supported 00:09:44.551 Unknown (12h): Supported 00:09:44.551 Copy (19h): Supported LBA-Change 00:09:44.551 Unknown (1Dh): Supported LBA-Change 00:09:44.551 00:09:44.551 Error Log 00:09:44.551 ========= 00:09:44.551 00:09:44.551 Arbitration 00:09:44.551 =========== 00:09:44.551 Arbitration Burst: no limit 00:09:44.551 00:09:44.551 Power Management 00:09:44.551 ================ 00:09:44.551 Number of Power States: 1 00:09:44.551 Current Power State: Power State #0 00:09:44.551 Power State #0: 00:09:44.551 Max Power: 25.00 W 00:09:44.551 Non-Operational State: 
Operational 00:09:44.551 Entry Latency: 16 microseconds 00:09:44.551 Exit Latency: 4 microseconds 00:09:44.551 Relative Read Throughput: 0 00:09:44.551 Relative Read Latency: 0 00:09:44.551 Relative Write Throughput: 0 00:09:44.551 Relative Write Latency: 0 00:09:44.551 Idle Power: Not Reported 00:09:44.551 Active Power: Not Reported 00:09:44.551 Non-Operational Permissive Mode: Not Supported 00:09:44.551 00:09:44.551 Health Information 00:09:44.551 ================== 00:09:44.551 Critical Warnings: 00:09:44.551 Available Spare Space: OK 00:09:44.551 Temperature: OK 00:09:44.551 Device Reliability: OK 00:09:44.551 Read Only: No 00:09:44.551 Volatile Memory Backup: OK 00:09:44.551 Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.551 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:44.551 Available Spare: 0% 00:09:44.551 Available Spare Threshold: 0% 00:09:44.551 Life Percentage Used: 0% 00:09:44.551 Data Units Read: 821 00:09:44.551 Data Units Written: 714 00:09:44.551 Host Read Commands: 35126 00:09:44.551 Host Write Commands: 33716 00:09:44.551 Controller Busy Time: 0 minutes 00:09:44.551 Power Cycles: 0 00:09:44.551 Power On Hours: 0 hours 00:09:44.551 Unsafe Shutdowns: 0 00:09:44.551 Unrecoverable Media Errors: 0 00:09:44.551 Lifetime Error Log Entries: 0 00:09:44.551 Warning Temperature Time: 0 minutes 00:09:44.551 Critical Temperature Time: 0 minutes 00:09:44.551 00:09:44.551 Number of Queues 00:09:44.551 ================ 00:09:44.551 Number of I/O Submission Queues: 64 00:09:44.551 Number of I/O Completion Queues: 64 00:09:44.551 00:09:44.551 ZNS Specific Controller Data 00:09:44.551 ============================ 00:09:44.551 Zone Append Size Limit: 0 00:09:44.551 00:09:44.551 00:09:44.551 Active Namespaces 00:09:44.551 ================= 00:09:44.551 Namespace ID:1 00:09:44.551 Error Recovery Timeout: Unlimited 00:09:44.551 Command Set Identifier: NVM (00h) 00:09:44.551 Deallocate: Supported 00:09:44.551 Deallocated/Unwritten Error: Supported 00:09:44.551 Deallocated Read Value: All 0x00 00:09:44.551 Deallocate in Write Zeroes: Not Supported 00:09:44.551 Deallocated Guard Field: 0xFFFF 00:09:44.551 Flush: Supported 00:09:44.551 Reservation: Not Supported 00:09:44.551 Namespace Sharing Capabilities: Multiple Controllers 00:09:44.551 Size (in LBAs): 262144 (1GiB) 00:09:44.551 Capacity (in LBAs): 262144 (1GiB) 00:09:44.551 Utilization (in LBAs): 262144 (1GiB) 00:09:44.551 Thin Provisioning: Not Supported 00:09:44.551 Per-NS Atomic Units: No 00:09:44.551 Maximum Single Source Range Length: 128 00:09:44.551 Maximum Copy Length: 128 00:09:44.551 Maximum Source Range Count: 128 00:09:44.551 NGUID/EUI64 Never Reused: No 00:09:44.551 Namespace Write Protected: No 00:09:44.551 Endurance group ID: 1 00:09:44.551 Number of LBA Formats: 8 00:09:44.551 Current LBA Format: LBA Format #04 00:09:44.551 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:44.551 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:44.551 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:44.551 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:44.551 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:44.551 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:44.551 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:44.551 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:44.551 00:09:44.551 Get Feature FDP: 00:09:44.551 ================ 00:09:44.551 Enabled: Yes 00:09:44.551 FDP configuration index: 0 00:09:44.551 00:09:44.551 FDP configurations log page 00:09:44.551 
=========================== 00:09:44.551 Number of FDP configurations: 1 00:09:44.551 Version: 0 00:09:44.551 Size: 112 00:09:44.551 FDP Configuration Descriptor: 0 00:09:44.551 Descriptor Size: 96 00:09:44.551 Reclaim Group Identifier format: 2 00:09:44.551 FDP Volatile Write Cache: Not Present 00:09:44.551 FDP Configuration: Valid 00:09:44.551 Vendor Specific Size: 0 00:09:44.551 Number of Reclaim Groups: 2 00:09:44.551 Number of Reclaim Unit Handles: 8 00:09:44.551 Max Placement Identifiers: 128 00:09:44.551 Number of Namespaces Supported: 256 00:09:44.551 Reclaim Unit Nominal Size: 6000000 bytes 00:09:44.551 Estimated Reclaim Unit Time Limit: Not Reported 00:09:44.551 RUH Desc #000: RUH Type: Initially Isolated 00:09:44.551 RUH Desc #001: RUH Type: Initially Isolated 00:09:44.551 RUH Desc #002: RUH Type: Initially Isolated 00:09:44.551 RUH Desc #003: RUH Type: Initially Isolated 00:09:44.551 RUH Desc #004: RUH Type: Initially Isolated 00:09:44.551 RUH Desc #005: RUH Type: Initially Isolated 00:09:44.551 RUH Desc #006: RUH Type: Initially Isolated 00:09:44.551 RUH Desc #007: RUH Type: Initially Isolated 00:09:44.551 00:09:44.551 FDP reclaim unit handle usage log page 00:09:44.551 ====================================== 00:09:44.551 Number of Reclaim Unit Handles: 8 00:09:44.551 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:44.551 RUH Usage Desc #001: RUH Attributes: Unused 00:09:44.551 RUH Usage Desc #002: RUH Attributes: Unused 00:09:44.551 RUH Usage Desc #003: RUH Attributes: Unused 00:09:44.551 RUH Usage Desc #004: RUH Attributes: Unused 00:09:44.551 RUH Usage Desc #005: RUH Attributes: Unused 00:09:44.551 RUH Usage Desc #006: RUH Attributes: Unused 00:09:44.551 RUH Usage Desc #007: RUH Attributes: Unused 00:09:44.551 00:09:44.551 FDP statistics log page 00:09:44.551 ======================= 00:09:44.551 Host bytes with metadata written: 450994176 00:09:44.551 Media bytes with metadata written: 451039232 00:09:44.551 Media bytes erased: 0 00:09:44.551 00:09:44.551 FDP events log page 00:09:44.551 =================== 00:09:44.551 Number of FDP events: 0 00:09:44.551 00:09:44.551 NVM Specific Namespace Data 00:09:44.551 =========================== 00:09:44.551 Logical Block Storage Tag Mask: 0 00:09:44.551 Protection Information Capabilities: 00:09:44.551 16b Guard Protection Information Storage Tag Support: No 00:09:44.551 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:44.551 Storage Tag Check Read Support: No 00:09:44.551 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.551 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.551 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.551 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.551 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.551 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.551 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.551 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:44.551 00:09:44.551 real 0m1.603s 00:09:44.551 user 0m0.692s 00:09:44.551 sys 0m0.705s 00:09:44.551 04:58:59 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.551 04:58:59 
nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:44.551 ************************************ 00:09:44.551 END TEST nvme_identify 00:09:44.551 ************************************ 00:09:44.551 04:58:59 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:44.551 04:58:59 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:44.551 04:58:59 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.551 04:58:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.551 ************************************ 00:09:44.551 START TEST nvme_perf 00:09:44.551 ************************************ 00:09:44.551 04:58:59 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:09:44.551 04:58:59 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:45.930 Initializing NVMe Controllers 00:09:45.930 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:45.930 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:45.930 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:45.930 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:45.930 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:45.930 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:45.930 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:45.930 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:45.930 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:45.930 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:45.930 Initialization complete. Launching workers. 00:09:45.930 ======================================================== 00:09:45.930 Latency(us) 00:09:45.930 Device Information : IOPS MiB/s Average min max 00:09:45.930 PCIE (0000:00:10.0) NSID 1 from core 0: 13480.97 157.98 9503.88 7775.26 42234.11 00:09:45.930 PCIE (0000:00:11.0) NSID 1 from core 0: 13480.97 157.98 9480.24 7744.75 39666.14 00:09:45.930 PCIE (0000:00:13.0) NSID 1 from core 0: 13480.97 157.98 9454.73 7882.70 37650.44 00:09:45.930 PCIE (0000:00:12.0) NSID 1 from core 0: 13480.97 157.98 9429.03 7867.83 35008.68 00:09:45.930 PCIE (0000:00:12.0) NSID 2 from core 0: 13480.97 157.98 9404.10 7927.04 32462.15 00:09:45.930 PCIE (0000:00:12.0) NSID 3 from core 0: 13480.97 157.98 9379.60 7895.07 29663.94 00:09:45.930 ======================================================== 00:09:45.930 Total : 80885.85 947.88 9441.93 7744.75 42234.11 00:09:45.930 00:09:45.930 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:45.930 ================================================================================= 00:09:45.930 1.00000% : 8043.055us 00:09:45.930 10.00000% : 8340.945us 00:09:45.930 25.00000% : 8638.836us 00:09:45.930 50.00000% : 9115.462us 00:09:45.930 75.00000% : 9770.822us 00:09:45.930 90.00000% : 10426.182us 00:09:45.930 95.00000% : 10783.651us 00:09:45.930 98.00000% : 11498.589us 00:09:45.930 99.00000% : 12511.418us 00:09:45.930 99.50000% : 34317.033us 00:09:45.930 99.90000% : 41704.727us 00:09:45.930 99.99000% : 42181.353us 00:09:45.930 99.99900% : 42419.665us 00:09:45.930 99.99990% : 42419.665us 00:09:45.930 99.99999% : 42419.665us 00:09:45.930 00:09:45.930 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:45.931 ================================================================================= 00:09:45.931 1.00000% : 8102.633us 00:09:45.931 10.00000% : 8400.524us 00:09:45.931 25.00000% : 8698.415us 00:09:45.931 
50.00000% : 9115.462us 00:09:45.931 75.00000% : 9830.400us 00:09:45.931 90.00000% : 10366.604us 00:09:45.931 95.00000% : 10664.495us 00:09:45.931 98.00000% : 11379.433us 00:09:45.931 99.00000% : 12094.371us 00:09:45.931 99.50000% : 32172.218us 00:09:45.931 99.90000% : 39321.600us 00:09:45.931 99.99000% : 39798.225us 00:09:45.931 99.99900% : 39798.225us 00:09:45.931 99.99990% : 39798.225us 00:09:45.931 99.99999% : 39798.225us 00:09:45.931 00:09:45.931 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:45.931 ================================================================================= 00:09:45.931 1.00000% : 8102.633us 00:09:45.931 10.00000% : 8400.524us 00:09:45.931 25.00000% : 8698.415us 00:09:45.931 50.00000% : 9115.462us 00:09:45.931 75.00000% : 9770.822us 00:09:45.931 90.00000% : 10366.604us 00:09:45.931 95.00000% : 10724.073us 00:09:45.931 98.00000% : 11439.011us 00:09:45.931 99.00000% : 12153.949us 00:09:45.931 99.50000% : 30027.404us 00:09:45.931 99.90000% : 37176.785us 00:09:45.931 99.99000% : 37653.411us 00:09:45.931 99.99900% : 37653.411us 00:09:45.931 99.99990% : 37653.411us 00:09:45.931 99.99999% : 37653.411us 00:09:45.931 00:09:45.931 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:45.931 ================================================================================= 00:09:45.931 1.00000% : 8102.633us 00:09:45.931 10.00000% : 8400.524us 00:09:45.931 25.00000% : 8698.415us 00:09:45.931 50.00000% : 9115.462us 00:09:45.931 75.00000% : 9770.822us 00:09:45.931 90.00000% : 10366.604us 00:09:45.931 95.00000% : 10724.073us 00:09:45.931 98.00000% : 11439.011us 00:09:45.931 99.00000% : 12153.949us 00:09:45.931 99.50000% : 27405.964us 00:09:45.931 99.90000% : 34555.345us 00:09:45.931 99.99000% : 35031.971us 00:09:45.931 99.99900% : 35031.971us 00:09:45.931 99.99990% : 35031.971us 00:09:45.931 99.99999% : 35031.971us 00:09:45.931 00:09:45.931 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:45.931 ================================================================================= 00:09:45.931 1.00000% : 8102.633us 00:09:45.931 10.00000% : 8400.524us 00:09:45.931 25.00000% : 8698.415us 00:09:45.931 50.00000% : 9115.462us 00:09:45.931 75.00000% : 9770.822us 00:09:45.931 90.00000% : 10366.604us 00:09:45.931 95.00000% : 10664.495us 00:09:45.931 98.00000% : 11439.011us 00:09:45.931 99.00000% : 12153.949us 00:09:45.931 99.50000% : 24903.680us 00:09:45.931 99.90000% : 32172.218us 00:09:45.931 99.99000% : 32648.844us 00:09:45.931 99.99900% : 32648.844us 00:09:45.931 99.99990% : 32648.844us 00:09:45.931 99.99999% : 32648.844us 00:09:45.931 00:09:45.931 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:45.931 ================================================================================= 00:09:45.931 1.00000% : 8102.633us 00:09:45.931 10.00000% : 8400.524us 00:09:45.931 25.00000% : 8698.415us 00:09:45.931 50.00000% : 9115.462us 00:09:45.931 75.00000% : 9770.822us 00:09:45.931 90.00000% : 10366.604us 00:09:45.931 95.00000% : 10664.495us 00:09:45.931 98.00000% : 11319.855us 00:09:45.931 99.00000% : 12094.371us 00:09:45.931 99.50000% : 22282.240us 00:09:45.931 99.90000% : 29312.465us 00:09:45.931 99.99000% : 29669.935us 00:09:45.931 99.99900% : 29669.935us 00:09:45.931 99.99990% : 29669.935us 00:09:45.931 99.99999% : 29669.935us 00:09:45.931 00:09:45.931 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:45.931 ============================================================================== 
00:09:45.931 Range in us Cumulative IO count 00:09:45.931 7745.164 - 7804.742: 0.0222% ( 3) 00:09:45.931 7804.742 - 7864.320: 0.0963% ( 10) 00:09:45.931 7864.320 - 7923.898: 0.3406% ( 33) 00:09:45.931 7923.898 - 7983.476: 0.8146% ( 64) 00:09:45.931 7983.476 - 8043.055: 1.6217% ( 109) 00:09:45.931 8043.055 - 8102.633: 2.7992% ( 159) 00:09:45.931 8102.633 - 8162.211: 4.2506% ( 196) 00:09:45.931 8162.211 - 8221.789: 6.0575% ( 244) 00:09:45.931 8221.789 - 8281.367: 8.1087% ( 277) 00:09:45.931 8281.367 - 8340.945: 10.6265% ( 340) 00:09:45.931 8340.945 - 8400.524: 13.2331% ( 352) 00:09:45.931 8400.524 - 8460.102: 16.1211% ( 390) 00:09:45.931 8460.102 - 8519.680: 19.0314% ( 393) 00:09:45.931 8519.680 - 8579.258: 21.9935% ( 400) 00:09:45.931 8579.258 - 8638.836: 25.2148% ( 435) 00:09:45.931 8638.836 - 8698.415: 28.3694% ( 426) 00:09:45.931 8698.415 - 8757.993: 31.4796% ( 420) 00:09:45.931 8757.993 - 8817.571: 34.7379% ( 440) 00:09:45.931 8817.571 - 8877.149: 37.9591% ( 435) 00:09:45.931 8877.149 - 8936.727: 41.1656% ( 433) 00:09:45.931 8936.727 - 8996.305: 44.4017% ( 437) 00:09:45.931 8996.305 - 9055.884: 47.4822% ( 416) 00:09:45.931 9055.884 - 9115.462: 50.4591% ( 402) 00:09:45.931 9115.462 - 9175.040: 53.1176% ( 359) 00:09:45.931 9175.040 - 9234.618: 55.7020% ( 349) 00:09:45.931 9234.618 - 9294.196: 58.2864% ( 349) 00:09:45.931 9294.196 - 9353.775: 60.6931% ( 325) 00:09:45.931 9353.775 - 9413.353: 63.0480% ( 318) 00:09:45.931 9413.353 - 9472.931: 65.4547% ( 325) 00:09:45.931 9472.931 - 9532.509: 67.7207% ( 306) 00:09:45.931 9532.509 - 9592.087: 69.8978% ( 294) 00:09:45.931 9592.087 - 9651.665: 71.9416% ( 276) 00:09:45.931 9651.665 - 9711.244: 73.7189% ( 240) 00:09:45.931 9711.244 - 9770.822: 75.3036% ( 214) 00:09:45.931 9770.822 - 9830.400: 76.8809% ( 213) 00:09:45.931 9830.400 - 9889.978: 78.4212% ( 208) 00:09:45.931 9889.978 - 9949.556: 79.7764% ( 183) 00:09:45.931 9949.556 - 10009.135: 81.1611% ( 187) 00:09:45.931 10009.135 - 10068.713: 82.5681% ( 190) 00:09:45.931 10068.713 - 10128.291: 83.9307% ( 184) 00:09:45.931 10128.291 - 10187.869: 85.2858% ( 183) 00:09:45.931 10187.869 - 10247.447: 86.5892% ( 176) 00:09:45.931 10247.447 - 10307.025: 87.8851% ( 175) 00:09:45.931 10307.025 - 10366.604: 89.1143% ( 166) 00:09:45.931 10366.604 - 10426.182: 90.2251% ( 150) 00:09:45.931 10426.182 - 10485.760: 91.3137% ( 147) 00:09:45.931 10485.760 - 10545.338: 92.2690% ( 129) 00:09:45.931 10545.338 - 10604.916: 93.1650% ( 121) 00:09:45.931 10604.916 - 10664.495: 94.0092% ( 114) 00:09:45.931 10664.495 - 10724.073: 94.6608% ( 88) 00:09:45.931 10724.073 - 10783.651: 95.2088% ( 74) 00:09:45.931 10783.651 - 10843.229: 95.6531% ( 60) 00:09:45.931 10843.229 - 10902.807: 96.0604% ( 55) 00:09:45.931 10902.807 - 10962.385: 96.4011% ( 46) 00:09:45.931 10962.385 - 11021.964: 96.7195% ( 43) 00:09:45.931 11021.964 - 11081.542: 96.9046% ( 25) 00:09:45.931 11081.542 - 11141.120: 97.1046% ( 27) 00:09:45.931 11141.120 - 11200.698: 97.3341% ( 31) 00:09:45.931 11200.698 - 11260.276: 97.4896% ( 21) 00:09:45.931 11260.276 - 11319.855: 97.6674% ( 24) 00:09:45.931 11319.855 - 11379.433: 97.8155% ( 20) 00:09:45.931 11379.433 - 11439.011: 97.9265% ( 15) 00:09:45.931 11439.011 - 11498.589: 98.0524% ( 17) 00:09:45.931 11498.589 - 11558.167: 98.1783% ( 17) 00:09:45.931 11558.167 - 11617.745: 98.2598% ( 11) 00:09:45.931 11617.745 - 11677.324: 98.3486% ( 12) 00:09:45.931 11677.324 - 11736.902: 98.4301% ( 11) 00:09:45.931 11736.902 - 11796.480: 98.5190% ( 12) 00:09:45.931 11796.480 - 11856.058: 98.5634% ( 6) 00:09:45.931 11856.058 - 
00:09:45.931 11915.636 - 11975.215: 98.6893% ( 9)
00:09:45.931 11975.215 - 12034.793: 98.7263% ( 5)
00:09:45.931 12034.793 - 12094.371: 98.8078% ( 11)
00:09:45.931 12094.371 - 12153.949: 98.8522% ( 6)
00:09:45.931 12153.949 - 12213.527: 98.9040% ( 7)
00:09:45.931 12213.527 - 12273.105: 98.9262% ( 3)
00:09:45.931 12273.105 - 12332.684: 98.9633% ( 5)
00:09:45.931 12332.684 - 12392.262: 98.9781% ( 2)
00:09:45.931 12392.262 - 12451.840: 98.9929% ( 2)
00:09:45.931 12451.840 - 12511.418: 99.0151% ( 3)
00:09:45.931 12511.418 - 12570.996: 99.0447% ( 4)
00:09:45.931 12570.996 - 12630.575: 99.0521% ( 1)
00:09:45.931 31457.280 - 31695.593: 99.0595% ( 1)
00:09:45.931 31695.593 - 31933.905: 99.0966% ( 5)
00:09:45.931 31933.905 - 32172.218: 99.1410% ( 6)
00:09:45.931 32172.218 - 32410.531: 99.1854% ( 6)
00:09:45.931 32410.531 - 32648.844: 99.2225% ( 5)
00:09:45.931 32648.844 - 32887.156: 99.2669% ( 6)
00:09:45.931 32887.156 - 33125.469: 99.3039% ( 5)
00:09:45.931 33125.469 - 33363.782: 99.3483% ( 6)
00:09:45.931 33363.782 - 33602.095: 99.3854% ( 5)
00:09:45.931 33602.095 - 33840.407: 99.4224% ( 5)
00:09:45.931 33840.407 - 34078.720: 99.4742% ( 7)
00:09:45.931 34078.720 - 34317.033: 99.5187% ( 6)
00:09:45.931 34317.033 - 34555.345: 99.5261% ( 1)
00:09:45.931 39321.600 - 39559.913: 99.5335% ( 1)
00:09:45.931 39559.913 - 39798.225: 99.5705% ( 5)
00:09:45.931 39798.225 - 40036.538: 99.6149% ( 6)
00:09:45.931 40036.538 - 40274.851: 99.6520% ( 5)
00:09:45.931 40274.851 - 40513.164: 99.6890% ( 5)
00:09:45.931 40513.164 - 40751.476: 99.7408% ( 7)
00:09:45.931 40751.476 - 40989.789: 99.7852% ( 6)
00:09:45.931 40989.789 - 41228.102: 99.8223% ( 5)
00:09:45.931 41228.102 - 41466.415: 99.8593% ( 5)
00:09:45.932 41466.415 - 41704.727: 99.9037% ( 6)
00:09:45.932 41704.727 - 41943.040: 99.9556% ( 7)
00:09:45.932 41943.040 - 42181.353: 99.9926% ( 5)
00:09:45.932 42181.353 - 42419.665: 100.0000% ( 1)
00:09:45.932
00:09:45.932 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:45.932 ==============================================================================
00:09:45.932 Range in us Cumulative IO count
00:09:45.932 7685.585 - 7745.164: 0.0074% ( 1)
00:09:45.932 7745.164 - 7804.742: 0.0296% ( 3)
00:09:45.932 7804.742 - 7864.320: 0.0741% ( 6)
00:09:45.932 7864.320 - 7923.898: 0.1555% ( 11)
00:09:45.932 7923.898 - 7983.476: 0.3629% ( 28)
00:09:45.932 7983.476 - 8043.055: 0.7405% ( 51)
00:09:45.932 8043.055 - 8102.633: 1.4144% ( 91)
00:09:45.932 8102.633 - 8162.211: 2.6214% ( 163)
00:09:45.932 8162.211 - 8221.789: 4.1321% ( 204)
00:09:45.932 8221.789 - 8281.367: 6.0278% ( 256)
00:09:45.932 8281.367 - 8340.945: 8.2568% ( 301)
00:09:45.932 8340.945 - 8400.524: 10.8560% ( 351)
00:09:45.932 8400.524 - 8460.102: 13.7959% ( 397)
00:09:45.932 8460.102 - 8519.680: 16.9950% ( 432)
00:09:45.932 8519.680 - 8579.258: 20.4162% ( 462)
00:09:45.932 8579.258 - 8638.836: 24.0299% ( 488)
00:09:45.932 8638.836 - 8698.415: 27.6066% ( 483)
00:09:45.932 8698.415 - 8757.993: 31.3092% ( 500)
00:09:45.932 8757.993 - 8817.571: 35.0415% ( 504)
00:09:45.932 8817.571 - 8877.149: 38.7070% ( 495)
00:09:45.932 8877.149 - 8936.727: 42.3282% ( 489)
00:09:45.932 8936.727 - 8996.305: 45.7790% ( 466)
00:09:45.932 8996.305 - 9055.884: 48.8966% ( 421)
00:09:45.932 9055.884 - 9115.462: 51.7328% ( 383)
00:09:45.932 9115.462 - 9175.040: 54.2432% ( 339)
00:09:45.932 9175.040 - 9234.618: 56.7313% ( 336)
00:09:45.932 9234.618 - 9294.196: 59.0788% ( 317)
00:09:45.932 9294.196 - 9353.775: 61.3744% ( 310)
00:09:45.932 9353.775 - 9413.353: 63.4627% ( 282)
00:09:45.932 9413.353 - 9472.931: 65.4843% ( 273)
00:09:45.932 9472.931 - 9532.509: 67.3726% ( 255)
00:09:45.932 9532.509 - 9592.087: 69.3943% ( 273)
00:09:45.932 9592.087 - 9651.665: 71.1493% ( 237)
00:09:45.932 9651.665 - 9711.244: 72.9191% ( 239)
00:09:45.932 9711.244 - 9770.822: 74.5705% ( 223)
00:09:45.932 9770.822 - 9830.400: 76.2663% ( 229)
00:09:45.932 9830.400 - 9889.978: 77.9473% ( 227)
00:09:45.932 9889.978 - 9949.556: 79.5172% ( 212)
00:09:45.932 9949.556 - 10009.135: 81.1537% ( 221)
00:09:45.932 10009.135 - 10068.713: 82.7755% ( 219)
00:09:45.932 10068.713 - 10128.291: 84.3454% ( 212)
00:09:45.932 10128.291 - 10187.869: 85.9671% ( 219)
00:09:45.932 10187.869 - 10247.447: 87.4185% ( 196)
00:09:45.932 10247.447 - 10307.025: 88.8181% ( 189)
00:09:45.932 10307.025 - 10366.604: 90.2029% ( 187)
00:09:45.932 10366.604 - 10426.182: 91.4100% ( 163)
00:09:45.932 10426.182 - 10485.760: 92.4985% ( 147)
00:09:45.932 10485.760 - 10545.338: 93.5278% ( 139)
00:09:45.932 10545.338 - 10604.916: 94.3572% ( 112)
00:09:45.932 10604.916 - 10664.495: 95.0755% ( 97)
00:09:45.932 10664.495 - 10724.073: 95.5421% ( 63)
00:09:45.932 10724.073 - 10783.651: 96.0308% ( 66)
00:09:45.932 10783.651 - 10843.229: 96.3492% ( 43)
00:09:45.932 10843.229 - 10902.807: 96.6751% ( 44)
00:09:45.932 10902.807 - 10962.385: 96.8898% ( 29)
00:09:45.932 10962.385 - 11021.964: 97.1194% ( 31)
00:09:45.932 11021.964 - 11081.542: 97.3119% ( 26)
00:09:45.932 11081.542 - 11141.120: 97.5341% ( 30)
00:09:45.932 11141.120 - 11200.698: 97.7044% ( 23)
00:09:45.932 11200.698 - 11260.276: 97.8377% ( 18)
00:09:45.932 11260.276 - 11319.855: 97.9784% ( 19)
00:09:45.932 11319.855 - 11379.433: 98.0895% ( 15)
00:09:45.932 11379.433 - 11439.011: 98.1931% ( 14)
00:09:45.932 11439.011 - 11498.589: 98.2968% ( 14)
00:09:45.932 11498.589 - 11558.167: 98.3931% ( 13)
00:09:45.932 11558.167 - 11617.745: 98.4819% ( 12)
00:09:45.932 11617.745 - 11677.324: 98.5856% ( 14)
00:09:45.932 11677.324 - 11736.902: 98.6819% ( 13)
00:09:45.932 11736.902 - 11796.480: 98.7633% ( 11)
00:09:45.932 11796.480 - 11856.058: 98.8152% ( 7)
00:09:45.932 11856.058 - 11915.636: 98.8670% ( 7)
00:09:45.932 11915.636 - 11975.215: 98.9114% ( 6)
00:09:45.932 11975.215 - 12034.793: 98.9707% ( 8)
00:09:45.932 12034.793 - 12094.371: 99.0151% ( 6)
00:09:45.932 12094.371 - 12153.949: 99.0521% ( 5)
00:09:45.932 29669.935 - 29789.091: 99.0595% ( 1)
00:09:45.932 29789.091 - 29908.247: 99.0743% ( 2)
00:09:45.932 29908.247 - 30027.404: 99.0966% ( 3)
00:09:45.932 30027.404 - 30146.560: 99.1262% ( 4)
00:09:45.932 30146.560 - 30265.716: 99.1410% ( 2)
00:09:45.932 30265.716 - 30384.873: 99.1706% ( 4)
00:09:45.932 30384.873 - 30504.029: 99.1928% ( 3)
00:09:45.932 30504.029 - 30742.342: 99.2373% ( 6)
00:09:45.932 30742.342 - 30980.655: 99.2817% ( 6)
00:09:45.932 30980.655 - 31218.967: 99.3261% ( 6)
00:09:45.932 31218.967 - 31457.280: 99.3706% ( 6)
00:09:45.932 31457.280 - 31695.593: 99.4150% ( 6)
00:09:45.932 31695.593 - 31933.905: 99.4594% ( 6)
00:09:45.932 31933.905 - 32172.218: 99.5113% ( 7)
00:09:45.932 32172.218 - 32410.531: 99.5261% ( 2)
00:09:45.932 37176.785 - 37415.098: 99.5705% ( 6)
00:09:45.932 37415.098 - 37653.411: 99.6223% ( 7)
00:09:45.932 37653.411 - 37891.724: 99.6520% ( 4)
00:09:45.932 37891.724 - 38130.036: 99.6964% ( 6)
00:09:45.932 38130.036 - 38368.349: 99.7482% ( 7)
00:09:45.932 38368.349 - 38606.662: 99.7927% ( 6)
00:09:45.932 38606.662 - 38844.975: 99.8371% ( 6)
00:09:45.932 38844.975 - 39083.287: 99.8815% ( 6)
00:09:45.932 39083.287 - 39321.600: 99.9334% ( 7)
00:09:45.932 39321.600 - 39559.913: 99.9704% ( 5)
00:09:45.932 39559.913 - 39798.225: 100.0000% ( 4)
00:09:45.932
00:09:45.932 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:45.932 ==============================================================================
00:09:45.932 Range in us Cumulative IO count
00:09:45.932 7864.320 - 7923.898: 0.0370% ( 5)
00:09:45.932 7923.898 - 7983.476: 0.1629% ( 17)
00:09:45.932 7983.476 - 8043.055: 0.4295% ( 36)
00:09:45.932 8043.055 - 8102.633: 1.0960% ( 90)
00:09:45.932 8102.633 - 8162.211: 2.2364% ( 154)
00:09:45.932 8162.211 - 8221.789: 3.7767% ( 208)
00:09:45.932 8221.789 - 8281.367: 5.6650% ( 255)
00:09:45.932 8281.367 - 8340.945: 8.0421% ( 321)
00:09:45.932 8340.945 - 8400.524: 10.7376% ( 364)
00:09:45.932 8400.524 - 8460.102: 13.6404% ( 392)
00:09:45.932 8460.102 - 8519.680: 16.9431% ( 446)
00:09:45.932 8519.680 - 8579.258: 20.4976% ( 480)
00:09:45.932 8579.258 - 8638.836: 24.0892% ( 485)
00:09:45.932 8638.836 - 8698.415: 27.7621% ( 496)
00:09:45.932 8698.415 - 8757.993: 31.5240% ( 508)
00:09:45.932 8757.993 - 8817.571: 35.2932% ( 509)
00:09:45.932 8817.571 - 8877.149: 39.0329% ( 505)
00:09:45.932 8877.149 - 8936.727: 42.7355% ( 500)
00:09:45.932 8936.727 - 8996.305: 46.2456% ( 474)
00:09:45.932 8996.305 - 9055.884: 49.3483% ( 419)
00:09:45.932 9055.884 - 9115.462: 52.1845% ( 383)
00:09:45.932 9115.462 - 9175.040: 54.7986% ( 353)
00:09:45.932 9175.040 - 9234.618: 57.2867% ( 336)
00:09:45.932 9234.618 - 9294.196: 59.6786% ( 323)
00:09:45.932 9294.196 - 9353.775: 61.9964% ( 313)
00:09:45.932 9353.775 - 9413.353: 64.2328% ( 302)
00:09:45.932 9413.353 - 9472.931: 66.3951% ( 292)
00:09:45.932 9472.931 - 9532.509: 68.3945% ( 270)
00:09:45.932 9532.509 - 9592.087: 70.2977% ( 257)
00:09:45.932 9592.087 - 9651.665: 72.0527% ( 237)
00:09:45.932 9651.665 - 9711.244: 73.7559% ( 230)
00:09:45.932 9711.244 - 9770.822: 75.3851% ( 220)
00:09:45.932 9770.822 - 9830.400: 76.9476% ( 211)
00:09:45.932 9830.400 - 9889.978: 78.5397% ( 215)
00:09:45.932 9889.978 - 9949.556: 80.1096% ( 212)
00:09:45.932 9949.556 - 10009.135: 81.7165% ( 217)
00:09:45.932 10009.135 - 10068.713: 83.2642% ( 209)
00:09:45.932 10068.713 - 10128.291: 84.7453% ( 200)
00:09:45.932 10128.291 - 10187.869: 86.2337% ( 201)
00:09:45.932 10187.869 - 10247.447: 87.6185% ( 187)
00:09:45.932 10247.447 - 10307.025: 89.0551% ( 194)
00:09:45.932 10307.025 - 10366.604: 90.2918% ( 167)
00:09:45.932 10366.604 - 10426.182: 91.4914% ( 162)
00:09:45.932 10426.182 - 10485.760: 92.4985% ( 136)
00:09:45.932 10485.760 - 10545.338: 93.3945% ( 121)
00:09:45.932 10545.338 - 10604.916: 94.2165% ( 111)
00:09:45.932 10604.916 - 10664.495: 94.8608% ( 87)
00:09:45.932 10664.495 - 10724.073: 95.3940% ( 72)
00:09:45.932 10724.073 - 10783.651: 95.8383% ( 60)
00:09:45.932 10783.651 - 10843.229: 96.2307% ( 53)
00:09:45.932 10843.229 - 10902.807: 96.5566% ( 44)
00:09:45.932 10902.807 - 10962.385: 96.8454% ( 39)
00:09:45.932 10962.385 - 11021.964: 97.0675% ( 30)
00:09:45.932 11021.964 - 11081.542: 97.2453% ( 24)
00:09:45.932 11081.542 - 11141.120: 97.4008% ( 21)
00:09:45.932 11141.120 - 11200.698: 97.5563% ( 21)
00:09:45.932 11200.698 - 11260.276: 97.6970% ( 19)
00:09:45.932 11260.276 - 11319.855: 97.8377% ( 19)
00:09:45.932 11319.855 - 11379.433: 97.9339% ( 13)
00:09:45.933 11379.433 - 11439.011: 98.0302% ( 13)
00:09:45.933 11439.011 - 11498.589: 98.1191% ( 12)
00:09:45.933 11498.589 - 11558.167: 98.2227% ( 14)
00:09:45.933 11558.167 - 11617.745: 98.3264% ( 14)
00:09:45.933 11617.745 - 11677.324: 98.4301% ( 14)
00:09:45.933 11677.324 - 11736.902: 98.5264% ( 13)
00:09:45.933 11736.902 - 11796.480: 98.6152% ( 12)
00:09:45.933 11796.480 - 11856.058: 98.7041% ( 12)
00:09:45.933 11856.058 - 11915.636: 98.7707% ( 9)
00:09:45.933 11915.636 - 11975.215: 98.8448% ( 10)
00:09:45.933 11975.215 - 12034.793: 98.8966% ( 7)
00:09:45.933 12034.793 - 12094.371: 98.9485% ( 7)
00:09:45.933 12094.371 - 12153.949: 99.0003% ( 7)
00:09:45.933 12153.949 - 12213.527: 99.0447% ( 6)
00:09:45.933 12213.527 - 12273.105: 99.0521% ( 1)
00:09:45.933 27525.120 - 27644.276: 99.0669% ( 2)
00:09:45.933 27644.276 - 27763.433: 99.0892% ( 3)
00:09:45.933 27763.433 - 27882.589: 99.1040% ( 2)
00:09:45.933 27882.589 - 28001.745: 99.1336% ( 4)
00:09:45.933 28001.745 - 28120.902: 99.1484% ( 2)
00:09:45.933 28120.902 - 28240.058: 99.1706% ( 3)
00:09:45.933 28240.058 - 28359.215: 99.1928% ( 3)
00:09:45.933 28359.215 - 28478.371: 99.2150% ( 3)
00:09:45.933 28478.371 - 28597.527: 99.2373% ( 3)
00:09:45.933 28597.527 - 28716.684: 99.2595% ( 3)
00:09:45.933 28716.684 - 28835.840: 99.2817% ( 3)
00:09:45.933 28835.840 - 28954.996: 99.3039% ( 3)
00:09:45.933 28954.996 - 29074.153: 99.3335% ( 4)
00:09:45.933 29074.153 - 29193.309: 99.3557% ( 3)
00:09:45.933 29193.309 - 29312.465: 99.3780% ( 3)
00:09:45.933 29312.465 - 29431.622: 99.4002% ( 3)
00:09:45.933 29431.622 - 29550.778: 99.4224% ( 3)
00:09:45.933 29550.778 - 29669.935: 99.4446% ( 3)
00:09:45.933 29669.935 - 29789.091: 99.4668% ( 3)
00:09:45.933 29789.091 - 29908.247: 99.4890% ( 3)
00:09:45.933 29908.247 - 30027.404: 99.5187% ( 4)
00:09:45.933 30027.404 - 30146.560: 99.5261% ( 1)
00:09:45.933 35031.971 - 35270.284: 99.5557% ( 4)
00:09:45.933 35270.284 - 35508.596: 99.6001% ( 6)
00:09:45.933 35508.596 - 35746.909: 99.6445% ( 6)
00:09:45.933 35746.909 - 35985.222: 99.6816% ( 5)
00:09:45.933 35985.222 - 36223.535: 99.7260% ( 6)
00:09:45.933 36223.535 - 36461.847: 99.7704% ( 6)
00:09:45.933 36461.847 - 36700.160: 99.8075% ( 5)
00:09:45.933 36700.160 - 36938.473: 99.8519% ( 6)
00:09:45.933 36938.473 - 37176.785: 99.9037% ( 7)
00:09:45.933 37176.785 - 37415.098: 99.9482% ( 6)
00:09:45.933 37415.098 - 37653.411: 100.0000% ( 7)
00:09:45.933
00:09:45.933 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:45.933 ==============================================================================
00:09:45.933 Range in us Cumulative IO count
00:09:45.933 7864.320 - 7923.898: 0.0666% ( 9)
00:09:45.933 7923.898 - 7983.476: 0.1999% ( 18)
00:09:45.933 7983.476 - 8043.055: 0.5036% ( 41)
00:09:45.933 8043.055 - 8102.633: 1.1996% ( 94)
00:09:45.933 8102.633 - 8162.211: 2.3178% ( 151)
00:09:45.933 8162.211 - 8221.789: 3.8803% ( 211)
00:09:45.933 8221.789 - 8281.367: 5.9094% ( 274)
00:09:45.933 8281.367 - 8340.945: 8.0273% ( 286)
00:09:45.933 8340.945 - 8400.524: 10.7598% ( 369)
00:09:45.933 8400.524 - 8460.102: 13.7293% ( 401)
00:09:45.933 8460.102 - 8519.680: 16.9209% ( 431)
00:09:45.933 8519.680 - 8579.258: 20.3273% ( 460)
00:09:45.933 8579.258 - 8638.836: 24.0151% ( 498)
00:09:45.933 8638.836 - 8698.415: 27.7251% ( 501)
00:09:45.933 8698.415 - 8757.993: 31.4722% ( 506)
00:09:45.933 8757.993 - 8817.571: 35.2562% ( 511)
00:09:45.933 8817.571 - 8877.149: 39.0625% ( 514)
00:09:45.933 8877.149 - 8936.727: 42.7725% ( 501)
00:09:45.933 8936.727 - 8996.305: 46.2382% ( 468)
00:09:45.933 8996.305 - 9055.884: 49.3780% ( 424)
00:09:45.933 9055.884 - 9115.462: 52.1919% ( 380)
00:09:45.933 9115.462 - 9175.040: 54.8800% ( 363)
00:09:45.933 9175.040 - 9234.618: 57.3534% ( 334)
00:09:45.933 9234.618 - 9294.196: 59.7749% ( 327)
00:09:45.933 9294.196 - 9353.775: 62.1001% ( 314)
00:09:45.933 9353.775 - 9413.353: 64.3439% ( 303)
00:09:45.933 9413.353 - 9472.931: 66.3359% ( 269)
00:09:45.933 9472.931 - 9532.509: 68.3131% ( 267)
00:09:45.933 9532.509 - 9592.087: 70.1940% ( 254)
00:09:45.933 9592.087 - 9651.665: 71.9935% ( 243)
00:09:45.933 9651.665 - 9711.244: 73.7115% ( 232)
00:09:45.933 9711.244 - 9770.822: 75.4739% ( 238)
00:09:45.933 9770.822 - 9830.400: 77.0661% ( 215)
00:09:45.933 9830.400 - 9889.978: 78.6656% ( 216)
00:09:45.933 9889.978 - 9949.556: 80.2503% ( 214)
00:09:45.933 9949.556 - 10009.135: 81.7758% ( 206)
00:09:45.933 10009.135 - 10068.713: 83.3012% ( 206)
00:09:45.933 10068.713 - 10128.291: 84.7971% ( 202)
00:09:45.933 10128.291 - 10187.869: 86.2485% ( 196)
00:09:45.933 10187.869 - 10247.447: 87.6185% ( 185)
00:09:45.933 10247.447 - 10307.025: 88.9662% ( 182)
00:09:45.933 10307.025 - 10366.604: 90.2547% ( 174)
00:09:45.933 10366.604 - 10426.182: 91.4322% ( 159)
00:09:45.933 10426.182 - 10485.760: 92.4837% ( 142)
00:09:45.933 10485.760 - 10545.338: 93.4316% ( 128)
00:09:45.933 10545.338 - 10604.916: 94.2536% ( 111)
00:09:45.933 10604.916 - 10664.495: 94.9200% ( 90)
00:09:45.933 10664.495 - 10724.073: 95.4606% ( 73)
00:09:45.933 10724.073 - 10783.651: 95.9271% ( 63)
00:09:45.933 10783.651 - 10843.229: 96.2752% ( 47)
00:09:45.933 10843.229 - 10902.807: 96.5936% ( 43)
00:09:45.933 10902.807 - 10962.385: 96.8454% ( 34)
00:09:45.933 10962.385 - 11021.964: 97.0898% ( 33)
00:09:45.933 11021.964 - 11081.542: 97.2527% ( 22)
00:09:45.933 11081.542 - 11141.120: 97.4082% ( 21)
00:09:45.933 11141.120 - 11200.698: 97.5785% ( 23)
00:09:45.933 11200.698 - 11260.276: 97.7192% ( 19)
00:09:45.933 11260.276 - 11319.855: 97.8599% ( 19)
00:09:45.933 11319.855 - 11379.433: 97.9710% ( 15)
00:09:45.933 11379.433 - 11439.011: 98.0820% ( 15)
00:09:45.933 11439.011 - 11498.589: 98.1635% ( 11)
00:09:45.933 11498.589 - 11558.167: 98.2672% ( 14)
00:09:45.933 11558.167 - 11617.745: 98.3634% ( 13)
00:09:45.933 11617.745 - 11677.324: 98.4671% ( 14)
00:09:45.933 11677.324 - 11736.902: 98.5560% ( 12)
00:09:45.933 11736.902 - 11796.480: 98.6597% ( 14)
00:09:45.933 11796.480 - 11856.058: 98.7337% ( 10)
00:09:45.933 11856.058 - 11915.636: 98.8004% ( 9)
00:09:45.933 11915.636 - 11975.215: 98.8596% ( 8)
00:09:45.933 11975.215 - 12034.793: 98.9040% ( 6)
00:09:45.933 12034.793 - 12094.371: 98.9559% ( 7)
00:09:45.933 12094.371 - 12153.949: 99.0003% ( 6)
00:09:45.933 12153.949 - 12213.527: 99.0447% ( 6)
00:09:45.933 12213.527 - 12273.105: 99.0521% ( 1)
00:09:45.933 25022.836 - 25141.993: 99.0743% ( 3)
00:09:45.933 25141.993 - 25261.149: 99.0892% ( 2)
00:09:45.933 25261.149 - 25380.305: 99.1114% ( 3)
00:09:45.933 25380.305 - 25499.462: 99.1336% ( 3)
00:09:45.933 25499.462 - 25618.618: 99.1558% ( 3)
00:09:45.933 25618.618 - 25737.775: 99.1780% ( 3)
00:09:45.933 25737.775 - 25856.931: 99.2076% ( 4)
00:09:45.933 25856.931 - 25976.087: 99.2299% ( 3)
00:09:45.933 25976.087 - 26095.244: 99.2521% ( 3)
00:09:45.933 26095.244 - 26214.400: 99.2743% ( 3)
00:09:45.933 26214.400 - 26333.556: 99.2965% ( 3)
00:09:45.933 26333.556 - 26452.713: 99.3187% ( 3)
00:09:45.933 26452.713 - 26571.869: 99.3409% ( 3)
00:09:45.933 26571.869 - 26691.025: 99.3706% ( 4)
00:09:45.933 26691.025 - 26810.182: 99.3928% ( 3)
00:09:45.933 26810.182 - 26929.338: 99.4150% ( 3)
00:09:45.933 26929.338 - 27048.495: 99.4372% ( 3)
00:09:45.933 27048.495 - 27167.651: 99.4594% ( 3)
00:09:45.933 27167.651 - 27286.807: 99.4816% ( 3)
00:09:45.933 27286.807 - 27405.964: 99.5039% ( 3)
00:09:45.933 27405.964 - 27525.120: 99.5261% ( 3)
00:09:45.933 32410.531 - 32648.844: 99.5409% ( 2)
00:09:45.933 32648.844 - 32887.156: 99.5853% ( 6)
00:09:45.933 32887.156 - 33125.469: 99.6371% ( 7)
00:09:45.933 33125.469 - 33363.782: 99.6816% ( 6)
00:09:45.933 33363.782 - 33602.095: 99.7260% ( 6)
00:09:45.933 33602.095 - 33840.407: 99.7704% ( 6)
00:09:45.933 33840.407 - 34078.720: 99.8223% ( 7)
00:09:45.933 34078.720 - 34317.033: 99.8667% ( 6)
00:09:45.933 34317.033 - 34555.345: 99.9111% ( 6)
00:09:45.933 34555.345 - 34793.658: 99.9482% ( 5)
00:09:45.933 34793.658 - 35031.971: 100.0000% ( 7)
00:09:45.933
00:09:45.933 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:45.933 ==============================================================================
00:09:45.933 Range in us Cumulative IO count
00:09:45.933 7923.898 - 7983.476: 0.1037% ( 14)
00:09:45.933 7983.476 - 8043.055: 0.5036% ( 54)
00:09:45.933 8043.055 - 8102.633: 1.3033% ( 108)
00:09:45.933 8102.633 - 8162.211: 2.3623% ( 143)
00:09:45.933 8162.211 - 8221.789: 3.8359% ( 199)
00:09:45.933 8221.789 - 8281.367: 5.7983% ( 265)
00:09:45.933 8281.367 - 8340.945: 7.9902% ( 296)
00:09:45.933 8340.945 - 8400.524: 10.7153% ( 368)
00:09:45.933 8400.524 - 8460.102: 13.7367% ( 408)
00:09:45.933 8460.102 - 8519.680: 16.9950% ( 440)
00:09:45.933 8519.680 - 8579.258: 20.5198% ( 476)
00:09:45.933 8579.258 - 8638.836: 24.0669% ( 479)
00:09:45.933 8638.836 - 8698.415: 27.8806% ( 515)
00:09:45.933 8698.415 - 8757.993: 31.6055% ( 503)
00:09:45.933 8757.993 - 8817.571: 35.3969% ( 512)
00:09:45.933 8817.571 - 8877.149: 39.2847% ( 525)
00:09:45.933 8877.149 - 8936.727: 43.0021% ( 502)
00:09:45.933 8936.727 - 8996.305: 46.3270% ( 449)
00:09:45.933 8996.305 - 9055.884: 49.4668% ( 424)
00:09:45.933 9055.884 - 9115.462: 52.2734% ( 379)
00:09:45.933 9115.462 - 9175.040: 54.8430% ( 347)
00:09:45.933 9175.040 - 9234.618: 57.3386% ( 337)
00:09:45.933 9234.618 - 9294.196: 59.8489% ( 339)
00:09:45.933 9294.196 - 9353.775: 62.1149% ( 306)
00:09:45.933 9353.775 - 9413.353: 64.3661% ( 304)
00:09:45.933 9413.353 - 9472.931: 66.3655% ( 270)
00:09:45.933 9472.931 - 9532.509: 68.3353% ( 266)
00:09:45.933 9532.509 - 9592.087: 70.2681% ( 261)
00:09:45.933 9592.087 - 9651.665: 72.0379% ( 239)
00:09:45.933 9651.665 - 9711.244: 73.7633% ( 233)
00:09:45.933 9711.244 - 9770.822: 75.5036% ( 235)
00:09:45.933 9770.822 - 9830.400: 77.1179% ( 218)
00:09:45.933 9830.400 - 9889.978: 78.6656% ( 209)
00:09:45.933 9889.978 - 9949.556: 80.2059% ( 208)
00:09:45.933 9949.556 - 10009.135: 81.7165% ( 204)
00:09:45.933 10009.135 - 10068.713: 83.1754% ( 197)
00:09:45.933 10068.713 - 10128.291: 84.6564% ( 200)
00:09:45.933 10128.291 - 10187.869: 86.1448% ( 201)
00:09:45.933 10187.869 - 10247.447: 87.6259% ( 200)
00:09:45.933 10247.447 - 10307.025: 88.9440% ( 178)
00:09:45.933 10307.025 - 10366.604: 90.2621% ( 178)
00:09:45.933 10366.604 - 10426.182: 91.5284% ( 171)
00:09:45.933 10426.182 - 10485.760: 92.6392% ( 150)
00:09:45.933 10485.760 - 10545.338: 93.6167% ( 132)
00:09:45.933 10545.338 - 10604.916: 94.4461% ( 112)
00:09:45.933 10604.916 - 10664.495: 95.0903% ( 87)
00:09:45.933 10664.495 - 10724.073: 95.6087% ( 70)
00:09:45.933 10724.073 - 10783.651: 96.0160% ( 55)
00:09:45.933 10783.651 - 10843.229: 96.2974% ( 38)
00:09:45.934 10843.229 - 10902.807: 96.5418% ( 33)
00:09:45.934 10902.807 - 10962.385: 96.7861% ( 33)
00:09:45.934 10962.385 - 11021.964: 96.9861% ( 27)
00:09:45.934 11021.964 - 11081.542: 97.1638% ( 24)
00:09:45.934 11081.542 - 11141.120: 97.3341% ( 23)
00:09:45.934 11141.120 - 11200.698: 97.5044% ( 23)
00:09:45.934 11200.698 - 11260.276: 97.6525% ( 20)
00:09:45.934 11260.276 - 11319.855: 97.7636% ( 15)
00:09:45.934 11319.855 - 11379.433: 97.8821% ( 16)
00:09:45.934 11379.433 - 11439.011: 98.0080% ( 17)
00:09:45.934 11439.011 - 11498.589: 98.0969% ( 12)
00:09:45.934 11498.589 - 11558.167: 98.2376% ( 19)
00:09:45.934 11558.167 - 11617.745: 98.3486% ( 15)
00:09:45.934 11617.745 - 11677.324: 98.4819% ( 18)
00:09:45.934 11677.324 - 11736.902: 98.5782% ( 13)
00:09:45.934 11736.902 - 11796.480: 98.6597% ( 11)
00:09:45.934 11796.480 - 11856.058: 98.7337% ( 10)
00:09:45.934 11856.058 - 11915.636: 98.8078% ( 10)
00:09:45.934 11915.636 - 11975.215: 98.8522% ( 6)
00:09:45.934 11975.215 - 12034.793: 98.9040% ( 7)
00:09:45.934 12034.793 - 12094.371: 98.9559% ( 7)
00:09:45.934 12094.371 - 12153.949: 99.0077% ( 7)
00:09:45.934 12153.949 - 12213.527: 99.0521% ( 6)
00:09:45.934 22520.553 - 22639.709: 99.0818% ( 4)
00:09:45.934 22639.709 - 22758.865: 99.1040% ( 3)
00:09:45.934 22758.865 - 22878.022: 99.1262% ( 3)
00:09:45.934 22878.022 - 22997.178: 99.1484% ( 3)
00:09:45.934 22997.178 - 23116.335: 99.1706% ( 3)
00:09:45.934 23116.335 - 23235.491: 99.2002% ( 4)
00:09:45.934 23235.491 - 23354.647: 99.2225% ( 3)
00:09:45.934 23354.647 - 23473.804: 99.2447% ( 3)
00:09:45.934 23473.804 - 23592.960: 99.2669% ( 3)
00:09:45.934 23592.960 - 23712.116: 99.2891% ( 3)
00:09:45.934 23712.116 - 23831.273: 99.3113% ( 3)
00:09:45.934 23831.273 - 23950.429: 99.3335% ( 3)
00:09:45.934 23950.429 - 24069.585: 99.3632% ( 4)
00:09:45.934 24069.585 - 24188.742: 99.3854% ( 3)
00:09:45.934 24188.742 - 24307.898: 99.4002% ( 2)
00:09:45.934 24307.898 - 24427.055: 99.4298% ( 4)
00:09:45.934 24427.055 - 24546.211: 99.4520% ( 3)
00:09:45.934 24546.211 - 24665.367: 99.4742% ( 3)
00:09:45.934 24665.367 - 24784.524: 99.4964% ( 3)
00:09:45.934 24784.524 - 24903.680: 99.5113% ( 2)
00:09:45.934 24903.680 - 25022.836: 99.5261% ( 2)
00:09:45.934 29908.247 - 30027.404: 99.5335% ( 1)
00:09:45.934 30027.404 - 30146.560: 99.5483% ( 2)
00:09:45.934 30146.560 - 30265.716: 99.5779% ( 4)
00:09:45.934 30265.716 - 30384.873: 99.5927% ( 2)
00:09:45.934 30384.873 - 30504.029: 99.6223% ( 4)
00:09:45.934 30504.029 - 30742.342: 99.6594% ( 5)
00:09:45.934 30742.342 - 30980.655: 99.7112% ( 7)
00:09:45.934 30980.655 - 31218.967: 99.7556% ( 6)
00:09:45.934 31218.967 - 31457.280: 99.7927% ( 5)
00:09:45.934 31457.280 - 31695.593: 99.8371% ( 6)
00:09:45.934 31695.593 - 31933.905: 99.8815% ( 6)
00:09:45.934 31933.905 - 32172.218: 99.9334% ( 7)
00:09:45.934 32172.218 - 32410.531: 99.9852% ( 7)
00:09:45.934 32410.531 - 32648.844: 100.0000% ( 2)
00:09:45.934
00:09:45.934 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:45.934 ==============================================================================
00:09:45.934 Range in us Cumulative IO count
00:09:45.934 7864.320 - 7923.898: 0.0296% ( 4)
00:09:45.934 7923.898 - 7983.476: 0.1407% ( 15)
00:09:45.934 7983.476 - 8043.055: 0.5702% ( 58)
00:09:45.934 8043.055 - 8102.633: 1.3033% ( 99)
00:09:45.934 8102.633 - 8162.211: 2.3919% ( 147)
00:09:45.934 8162.211 - 8221.789: 3.8803% ( 201)
00:09:45.934 8221.789 - 8281.367: 5.8575% ( 267)
00:09:45.934 8281.367 - 8340.945: 7.9902% ( 288)
00:09:45.934 8340.945 - 8400.524: 10.5969% ( 352)
00:09:45.934 8400.524 - 8460.102: 13.6478% ( 412)
00:09:45.934 8460.102 - 8519.680: 16.9135% ( 441)
00:09:45.934 8519.680 - 8579.258: 20.4310% ( 475)
00:09:45.934 8579.258 - 8638.836: 24.0299% ( 486)
00:09:45.934 8638.836 - 8698.415: 27.5918% ( 481)
00:09:45.934 8698.415 - 8757.993: 31.3907% ( 513)
00:09:45.934 8757.993 - 8817.571: 35.1600% ( 509)
00:09:45.934 8817.571 - 8877.149: 38.9884% ( 517)
00:09:45.934 8877.149 - 8936.727: 42.6836% ( 499)
00:09:45.934 8936.727 - 8996.305: 46.0752% ( 458)
00:09:45.934 8996.305 - 9055.884: 49.1928% ( 421)
00:09:45.934 9055.884 - 9115.462: 52.0809% ( 390)
00:09:45.934 9115.462 - 9175.040: 54.7393% ( 359)
00:09:45.934 9175.040 - 9234.618: 57.2793% ( 343)
00:09:45.934 9234.618 - 9294.196: 59.6786% ( 324)
00:09:45.934 9294.196 - 9353.775: 61.9298% ( 304)
00:09:45.934 9353.775 - 9413.353: 64.0403% ( 285)
00:09:45.934 9413.353 - 9472.931: 66.1508% ( 285)
00:09:45.934 9472.931 - 9532.509: 68.1724% ( 273)
00:09:45.934 9532.509 - 9592.087: 69.9867% ( 245)
00:09:45.934 9592.087 - 9651.665: 71.8232% ( 248)
00:09:45.934 9651.665 - 9711.244: 73.6226% ( 243)
00:09:45.934 9711.244 - 9770.822: 75.3703% ( 236)
00:09:45.934 9770.822 - 9830.400: 76.9328% ( 211)
00:09:45.934 9830.400 - 9889.978: 78.5545% ( 219)
00:09:45.934 9889.978 - 9949.556: 80.0800% ( 206)
00:09:45.934 9949.556 - 10009.135: 81.6573% ( 213)
00:09:45.934 10009.135 - 10068.713: 83.2198% ( 211)
00:09:45.934 10068.713 - 10128.291: 84.6786% ( 197)
00:09:45.934 10128.291 - 10187.869: 86.1448% ( 198)
00:09:45.934 10187.869 - 10247.447: 87.5815% ( 194)
00:09:45.934 10247.447 - 10307.025: 88.9292% ( 182)
00:09:45.934 10307.025 - 10366.604: 90.2473% ( 178)
00:09:45.934 10366.604 - 10426.182: 91.4692% ( 165)
00:09:45.934 10426.182 - 10485.760: 92.6318% ( 157)
00:09:45.934 10485.760 - 10545.338: 93.5945% ( 130)
00:09:45.934 10545.338 - 10604.916: 94.4831% ( 120)
00:09:45.934 10604.916 - 10664.495: 95.0829% ( 81)
00:09:45.934 10664.495 - 10724.073: 95.6161% ( 72)
00:09:45.934 10724.073 - 10783.651: 96.0308% ( 56)
00:09:45.934 10783.651 - 10843.229: 96.3863% ( 48)
00:09:45.934 10843.229 - 10902.807: 96.6677% ( 38)
00:09:45.934 10902.807 - 10962.385: 96.8972% ( 31)
00:09:45.934 10962.385 - 11021.964: 97.1120% ( 29)
00:09:45.934 11021.964 - 11081.542: 97.3119% ( 27)
00:09:45.934 11081.542 - 11141.120: 97.5193% ( 28)
00:09:45.934 11141.120 - 11200.698: 97.6970% ( 24)
00:09:45.934 11200.698 - 11260.276: 97.8451% ( 20)
00:09:45.934 11260.276 - 11319.855: 98.0080% ( 22)
00:09:45.934 11319.855 - 11379.433: 98.1339% ( 17)
00:09:45.934 11379.433 - 11439.011: 98.2598% ( 17)
00:09:45.934 11439.011 - 11498.589: 98.4005% ( 19)
00:09:45.934 11498.589 - 11558.167: 98.5116% ( 15)
00:09:45.934 11558.167 - 11617.745: 98.6152% ( 14)
00:09:45.934 11617.745 - 11677.324: 98.7115% ( 13)
00:09:45.934 11677.324 - 11736.902: 98.7707% ( 8)
00:09:45.934 11736.902 - 11796.480: 98.8448% ( 10)
00:09:45.934 11796.480 - 11856.058: 98.8892% ( 6)
00:09:45.934 11856.058 - 11915.636: 98.9411% ( 7)
00:09:45.934 11915.636 - 11975.215: 98.9633% ( 3)
00:09:45.934 11975.215 - 12034.793: 98.9929% ( 4)
00:09:45.934 12034.793 - 12094.371: 99.0225% ( 4)
00:09:45.934 12094.371 - 12153.949: 99.0447% ( 3)
00:09:45.934 12153.949 - 12213.527: 99.0521% ( 1)
00:09:45.934 19899.113 - 20018.269: 99.0669% ( 2)
00:09:45.934 20018.269 - 20137.425: 99.0892% ( 3)
00:09:45.934 20137.425 - 20256.582: 99.1114% ( 3)
00:09:45.934 20256.582 - 20375.738: 99.1336% ( 3)
00:09:45.934 20375.738 - 20494.895: 99.1558% ( 3)
00:09:45.934 20494.895 - 20614.051: 99.1780% ( 3)
00:09:45.934 20614.051 - 20733.207: 99.2002% ( 3)
00:09:45.934 20733.207 - 20852.364: 99.2299% ( 4)
00:09:45.934 20852.364 - 20971.520: 99.2521% ( 3)
00:09:45.934 20971.520 - 21090.676: 99.2743% ( 3)
00:09:45.935 21090.676 - 21209.833: 99.2965% ( 3)
00:09:45.935 21209.833 - 21328.989: 99.3187% ( 3)
00:09:45.935 21328.989 - 21448.145: 99.3409% ( 3)
00:09:45.935 21448.145 - 21567.302: 99.3632% ( 3)
00:09:45.935 21567.302 - 21686.458: 99.3854% ( 3)
00:09:45.935 21686.458 - 21805.615: 99.4076% ( 3)
00:09:45.935 21805.615 - 21924.771: 99.4372% ( 4)
00:09:45.935 21924.771 - 22043.927: 99.4594% ( 3)
00:09:45.935 22043.927 - 22163.084: 99.4816% ( 3)
00:09:45.935 22163.084 - 22282.240: 99.5039% ( 3)
00:09:45.935 22282.240 - 22401.396: 99.5261% ( 3)
00:09:45.935 27286.807 - 27405.964: 99.5335% ( 1)
00:09:45.935 27405.964 - 27525.120: 99.5557% ( 3)
00:09:45.935 27525.120 - 27644.276: 99.5779% ( 3)
00:09:45.935 27644.276 - 27763.433: 99.6001% ( 3)
00:09:45.935 27763.433 - 27882.589: 99.6297% ( 4)
00:09:45.935 27882.589 - 28001.745: 99.6520% ( 3)
00:09:45.935 28001.745 - 28120.902: 99.6742% ( 3)
00:09:45.935 28120.902 - 28240.058: 99.7038% ( 4)
00:09:45.935 28240.058 - 28359.215: 99.7260% ( 3)
00:09:45.935 28359.215 - 28478.371: 99.7482% ( 3)
00:09:45.935 28478.371 - 28597.527: 99.7704% ( 3)
00:09:45.935 28597.527 - 28716.684: 99.7927% ( 3)
00:09:45.935 28716.684 - 28835.840: 99.8149% ( 3)
00:09:45.935 28835.840 - 28954.996: 99.8445% ( 4)
00:09:45.935 28954.996 - 29074.153: 99.8667% ( 3)
00:09:45.935 29074.153 - 29193.309: 99.8889% ( 3)
00:09:45.935 29193.309 - 29312.465: 99.9185% ( 4)
00:09:45.935 29312.465 - 29431.622: 99.9482% ( 4)
00:09:45.935 29431.622 - 29550.778: 99.9704% ( 3)
00:09:45.935 29550.778 - 29669.935: 100.0000% ( 4)
00:09:45.935
00:09:45.935 04:59:00 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:09:47.316 Initializing NVMe Controllers
00:09:47.316 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:47.316 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:47.316 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:47.316 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:47.316 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:47.316 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:47.316 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:47.316 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:47.316 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:47.316 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:47.316 Initialization complete. Launching workers.
00:09:47.316 ========================================================
00:09:47.316 Latency(us)
00:09:47.316 Device Information : IOPS MiB/s Average min max
00:09:47.316 PCIE (0000:00:10.0) NSID 1 from core 0: 11706.97 137.19 10958.67 8329.74 40842.76
00:09:47.316 PCIE (0000:00:11.0) NSID 1 from core 0: 11706.97 137.19 10938.72 8484.51 38663.68
00:09:47.316 PCIE (0000:00:13.0) NSID 1 from core 0: 11706.97 137.19 10919.93 8502.04 37221.22
00:09:47.316 PCIE (0000:00:12.0) NSID 1 from core 0: 11706.97 137.19 10903.72 8581.48 35811.93
00:09:47.316 PCIE (0000:00:12.0) NSID 2 from core 0: 11706.97 137.19 10888.43 8409.59 34134.90
00:09:47.316 PCIE (0000:00:12.0) NSID 3 from core 0: 11706.97 137.19 10873.01 8485.55 32377.11
00:09:47.316 ========================================================
00:09:47.316 Total : 70241.80 823.15 10913.75 8329.74 40842.76
00:09:47.316
00:09:47.316 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:47.316 =================================================================================
00:09:47.316 1.00000% : 8638.836us
00:09:47.316 10.00000% : 9294.196us
00:09:47.316 25.00000% : 9830.400us
00:09:47.316 50.00000% : 10545.338us
00:09:47.316 75.00000% : 11439.011us
00:09:47.316 90.00000% : 12511.418us
00:09:47.316 95.00000% : 13107.200us
00:09:47.316 98.00000% : 14537.076us
00:09:47.316 99.00000% : 30027.404us
00:09:47.316 99.50000% : 39083.287us
00:09:47.316 99.90000% : 40513.164us
00:09:47.316 99.99000% : 40989.789us
00:09:47.316 99.99900% : 40989.789us
00:09:47.316 99.99990% : 40989.789us
00:09:47.316 99.99999% : 40989.789us
00:09:47.316
00:09:47.316 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:47.316 =================================================================================
00:09:47.316 1.00000% : 8817.571us
00:09:47.316 10.00000% : 9294.196us
00:09:47.316 25.00000% : 9889.978us
00:09:47.316 50.00000% : 10545.338us
00:09:47.316 75.00000% : 11439.011us
00:09:47.316 90.00000% : 12392.262us
00:09:47.316 95.00000% : 12988.044us
00:09:47.316 98.00000% : 15073.280us
00:09:47.316 99.00000% : 29431.622us
00:09:47.316 99.50000% : 36938.473us
00:09:47.316 99.90000% : 38368.349us
00:09:47.316 99.99000% : 38844.975us
00:09:47.316 99.99900% : 38844.975us
00:09:47.316 99.99990% : 38844.975us
00:09:47.316 99.99999% : 38844.975us
00:09:47.316
00:09:47.316 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:47.316 =================================================================================
00:09:47.316 1.00000% : 8817.571us
00:09:47.316 10.00000% : 9294.196us
00:09:47.316 25.00000% : 9889.978us
00:09:47.316 50.00000% : 10545.338us
00:09:47.316 75.00000% : 11379.433us
00:09:47.316 90.00000% : 12451.840us
00:09:47.316 95.00000% : 13047.622us
00:09:47.316 98.00000% : 15371.171us
00:09:47.316 99.00000% : 28478.371us
00:09:47.316 99.50000% : 35508.596us
00:09:47.316 99.90000% : 36938.473us
00:09:47.316 99.99000% : 37415.098us
00:09:47.316 99.99900% : 37415.098us
00:09:47.316 99.99990% : 37415.098us
00:09:47.316 99.99999% : 37415.098us
00:09:47.316
00:09:47.316 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:47.316 =================================================================================
00:09:47.316 1.00000% : 8877.149us
00:09:47.316 10.00000% : 9294.196us
00:09:47.316 25.00000% : 9889.978us
00:09:47.316 50.00000% : 10545.338us
00:09:47.316 75.00000% : 11379.433us
00:09:47.316 90.00000% : 12451.840us
00:09:47.316 95.00000% : 12988.044us
00:09:47.316 98.00000% : 15371.171us
00:09:47.316 99.00000% : 26810.182us
00:09:47.316 99.50000% : 34078.720us
00:09:47.316 99.90000% : 35508.596us
00:09:47.316 99.99000% : 35985.222us
00:09:47.316 99.99900% : 35985.222us
00:09:47.316 99.99990% : 35985.222us
00:09:47.316 99.99999% : 35985.222us
00:09:47.316
00:09:47.316 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:47.316 =================================================================================
00:09:47.316 1.00000% : 8817.571us
00:09:47.316 10.00000% : 9353.775us
00:09:47.316 25.00000% : 9889.978us
00:09:47.316 50.00000% : 10485.760us
00:09:47.316 75.00000% : 11379.433us
00:09:47.316 90.00000% : 12451.840us
00:09:47.316 95.00000% : 13107.200us
00:09:47.316 98.00000% : 15371.171us
00:09:47.316 99.00000% : 25261.149us
00:09:47.316 99.50000% : 32410.531us
00:09:47.316 99.90000% : 33840.407us
00:09:47.316 99.99000% : 34317.033us
00:09:47.316 99.99900% : 34317.033us
00:09:47.316 99.99990% : 34317.033us
00:09:47.316 99.99999% : 34317.033us
00:09:47.316
00:09:47.316 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:47.316 =================================================================================
00:09:47.316 1.00000% : 8817.571us
00:09:47.316 10.00000% : 9353.775us
00:09:47.316 25.00000% : 9889.978us
00:09:47.316 50.00000% : 10545.338us
00:09:47.316 75.00000% : 11439.011us
00:09:47.316 90.00000% : 12392.262us
00:09:47.316 95.00000% : 13047.622us
00:09:47.316 98.00000% : 14656.233us
00:09:47.316 99.00000% : 23831.273us
00:09:47.316 99.50000% : 30980.655us
00:09:47.316 99.90000% : 32172.218us
00:09:47.316 99.99000% : 32410.531us
00:09:47.316 99.99900% : 32410.531us
00:09:47.316 99.99990% : 32410.531us
00:09:47.316 99.99999% : 32410.531us
00:09:47.316
00:09:47.316 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:47.316 ==============================================================================
00:09:47.316 Range in us Cumulative IO count
00:09:47.316 8281.367 - 8340.945: 0.0171% ( 2)
00:09:47.316 8400.524 - 8460.102: 0.0342% ( 2)
00:09:47.316 8460.102 - 8519.680: 0.2561% ( 26)
00:09:47.316 8519.680 - 8579.258: 0.6660% ( 48)
00:09:47.316 8579.258 - 8638.836: 1.2295% ( 66)
00:09:47.316 8638.836 - 8698.415: 1.9211% ( 81)
00:09:47.316 8698.415 - 8757.993: 2.7066% ( 92)
00:09:47.316 8757.993 - 8817.571: 3.6031% ( 105)
00:09:47.316 8817.571 - 8877.149: 4.4228% ( 96)
00:09:47.316 8877.149 - 8936.727: 5.1144% ( 81)
00:09:47.316 8936.727 - 8996.305: 5.8487% ( 86)
00:09:47.316 8996.305 - 9055.884: 6.6342% ( 92)
00:09:47.316 9055.884 - 9115.462: 7.4624% ( 97)
00:09:47.316 9115.462 - 9175.040: 8.6322% ( 137)
00:09:47.316 9175.040 - 9234.618: 9.7080% ( 126)
00:09:47.316 9234.618 - 9294.196: 10.7753% ( 125)
00:09:47.316 9294.196 - 9353.775: 11.7145% ( 110)
00:09:47.316 9353.775 - 9413.353: 12.7732% ( 124)
00:09:47.316 9413.353 - 9472.931: 13.9515% ( 138)
00:09:47.316 9472.931 - 9532.509: 15.4115% ( 171)
00:09:47.316 9532.509 - 9592.087: 17.2985% ( 221)
00:09:47.316 9592.087 - 9651.665: 19.0659% ( 207)
00:09:47.316 9651.665 - 9711.244: 20.9102% ( 216)
00:09:47.316 9711.244 - 9770.822: 22.8398% ( 226)
00:09:47.316 9770.822 - 9830.400: 25.0854% ( 263)
00:09:47.316 9830.400 - 9889.978: 27.3566% ( 266)
00:09:47.316 9889.978 - 9949.556: 29.3033% ( 228)
00:09:47.316 9949.556 - 10009.135: 31.4635% ( 253)
00:09:47.316 10009.135 - 10068.713: 33.6151% ( 252)
00:09:47.316 10068.713 - 10128.291: 35.5704% ( 229)
00:09:47.317 10128.291 - 10187.869: 37.8586% ( 268)
00:09:47.317 10187.869 - 10247.447: 39.9163% ( 241)
00:09:47.317 10247.447 - 10307.025: 42.1192% ( 258)
00:09:47.317 10307.025 - 10366.604: 44.2025% ( 244)
00:09:47.317 10366.604 - 10426.182: 46.4225% ( 260)
00:09:47.317 10426.182 - 10485.760: 48.4204% ( 234)
00:09:47.317 10485.760 - 10545.338: 50.4355% ( 236)
00:09:47.317 10545.338 - 10604.916: 52.6895% ( 264)
00:09:47.317 10604.916 - 10664.495: 54.9010% ( 259)
00:09:47.317 10664.495 - 10724.073: 57.0184% ( 248)
00:09:47.317 10724.073 - 10783.651: 59.1615% ( 251)
00:09:47.317 10783.651 - 10843.229: 61.0997% ( 227)
00:09:47.317 10843.229 - 10902.807: 62.8330% ( 203)
00:09:47.317 10902.807 - 10962.385: 64.4296% ( 187)
00:09:47.317 10962.385 - 11021.964: 66.1800% ( 205)
00:09:47.317 11021.964 - 11081.542: 67.6315% ( 170)
00:09:47.317 11081.542 - 11141.120: 69.0915% ( 171)
00:09:47.317 11141.120 - 11200.698: 70.3808% ( 151)
00:09:47.317 11200.698 - 11260.276: 71.6359% ( 147)
00:09:47.317 11260.276 - 11319.855: 72.8996% ( 148)
00:09:47.317 11319.855 - 11379.433: 74.0608% ( 136)
00:09:47.317 11379.433 - 11439.011: 75.2476% ( 139)
00:09:47.317 11439.011 - 11498.589: 76.2551% ( 118)
00:09:47.317 11498.589 - 11558.167: 77.3566% ( 129)
00:09:47.317 11558.167 - 11617.745: 78.4665% ( 130)
00:09:47.317 11617.745 - 11677.324: 79.6107% ( 134)
00:09:47.317 11677.324 - 11736.902: 80.4474% ( 98)
00:09:47.317 11736.902 - 11796.480: 81.3354% ( 104)
00:09:47.317 11796.480 - 11856.058: 82.1380% ( 94)
00:09:47.317 11856.058 - 11915.636: 83.0089% ( 102)
00:09:47.317 11915.636 - 11975.215: 83.7859% ( 91)
00:09:47.317 11975.215 - 12034.793: 84.5031% ( 84)
00:09:47.317 12034.793 - 12094.371: 85.2715% ( 90)
00:09:47.317 12094.371 - 12153.949: 86.1253% ( 100)
00:09:47.317 12153.949 - 12213.527: 86.9365% ( 95)
00:09:47.317 12213.527 - 12273.105: 87.7476% ( 95)
00:09:47.317 12273.105 - 12332.684: 88.4904% ( 87)
00:09:47.317 12332.684 - 12392.262: 89.2760% ( 92)
00:09:47.317 12392.262 - 12451.840: 89.9846% ( 83)
00:09:47.317 12451.840 - 12511.418: 90.7104% ( 85)
00:09:47.317 12511.418 - 12570.996: 91.2995% ( 69)
00:09:47.317 12570.996 - 12630.575: 91.8374% ( 63)
00:09:47.317 12630.575 - 12690.153: 92.4010% ( 66)
00:09:47.317 12690.153 - 12749.731: 92.9645% ( 66)
00:09:47.317 12749.731 - 12809.309: 93.4512% ( 57)
00:09:47.317 12809.309 - 12868.887: 93.8952% ( 52)
00:09:47.317 12868.887 - 12928.465: 94.1940% ( 35)
00:09:47.317 12928.465 - 12988.044: 94.5270% ( 39)
00:09:47.317 12988.044 - 13047.622: 94.8002% ( 32)
00:09:47.317 13047.622 - 13107.200: 95.1673% ( 43)
00:09:47.317 13107.200 - 13166.778: 95.5089% ( 40)
00:09:47.317 13166.778 - 13226.356: 95.7480% ( 28)
00:09:47.317 13226.356 - 13285.935: 95.9614% ( 25)
00:09:47.317 13285.935 - 13345.513: 96.2005% ( 28)
00:09:47.317 13345.513 - 13405.091: 96.3883% ( 22)
00:09:47.317 13405.091 - 13464.669: 96.5591% ( 20)
00:09:47.317 13464.669 - 13524.247: 96.7128% ( 18)
00:09:47.317 13524.247 - 13583.825: 96.8579% ( 17)
00:09:47.317 13583.825 - 13643.404: 96.9775% ( 14)
00:09:47.317 13643.404 - 13702.982: 97.0970% ( 14)
00:09:47.317 13702.982 - 13762.560: 97.1738% ( 9)
00:09:47.317 13762.560 - 13822.138: 97.2848% ( 13)
00:09:47.317 13822.138 - 13881.716: 97.3531% ( 8)
00:09:47.317 13881.716 - 13941.295: 97.4129% ( 7)
00:09:47.317 13941.295 - 14000.873: 97.4983% ( 10)
00:09:47.317 14000.873 - 14060.451: 97.5581% ( 7)
00:09:47.317 14060.451 - 14120.029: 97.6520% ( 11)
00:09:47.317 14120.029 - 14179.607: 97.7117% ( 7)
00:09:47.317 14179.607 - 14239.185: 97.7715% ( 7)
00:09:47.317 14239.185 - 14298.764: 97.8142% ( 5)
00:09:47.317 14298.764 - 14358.342: 97.8740% ( 7)
00:09:47.317 14358.342 - 14417.920: 97.9252% ( 6)
00:09:47.317 14417.920 - 14477.498: 97.9935% ( 8)
00:09:47.317 14477.498 - 14537.076: 98.0277% ( 4)
00:09:47.317 14537.076 - 14596.655: 98.0533% ( 3)
00:09:47.317 14596.655 - 14656.233: 98.0789% ( 3)
00:09:47.317 14656.233 - 14715.811: 98.0960% ( 2)
00:09:47.317 14715.811 - 14775.389: 98.1216% ( 3)
00:09:47.317 14775.389 - 14834.967: 98.1301% ( 1)
00:09:47.317 14834.967 - 14894.545: 98.1557% ( 3)
00:09:47.317 14894.545 - 14954.124: 98.1728% ( 2)
00:09:47.317 14954.124 - 15013.702: 98.1984% ( 3)
00:09:47.317 15013.702 - 15073.280: 98.2155% ( 2)
00:09:47.317 15073.280 - 15132.858: 98.2326% ( 2)
00:09:47.317 15132.858 - 15192.436: 98.2497% ( 2)
00:09:47.317 15192.436 - 15252.015: 98.2753% ( 3)
00:09:47.317 15252.015 - 15371.171: 98.3094% ( 4)
00:09:47.317 15371.171 - 15490.327: 98.4290% ( 14)
00:09:47.317 15490.327 - 15609.484: 98.5143% ( 10)
00:09:47.317 15609.484 - 15728.640: 98.5485% ( 4)
00:09:47.317 15728.640 - 15847.796: 98.5827% ( 4)
00:09:47.317 15847.796 - 15966.953: 98.6168% ( 4)
00:09:47.317 15966.953 - 16086.109: 98.6680% ( 6)
00:09:47.317 16086.109 - 16205.265: 98.7193% ( 6)
00:09:47.317 16205.265 - 16324.422: 98.7705% ( 6)
00:09:47.317 16324.422 - 16443.578: 98.8303% ( 7)
00:09:47.317 16443.578 - 16562.735: 98.8730% ( 5)
00:09:47.317 16562.735 - 16681.891: 98.9071% ( 4)
00:09:47.317 29431.622 - 29550.778: 98.9156% ( 1)
00:09:47.317 29550.778 - 29669.935: 98.9413% ( 3)
00:09:47.317 29669.935 - 29789.091: 98.9669% ( 3)
00:09:47.317 29789.091 - 29908.247: 98.9925% ( 3)
00:09:47.317 29908.247 - 30027.404: 99.0266% ( 4)
00:09:47.317 30027.404 - 30146.560: 99.0523% ( 3)
00:09:47.317 30146.560 - 30265.716: 99.0779% ( 3)
00:09:47.317 30265.716 - 30384.873: 99.1120% ( 4)
00:09:47.317 30384.873 - 30504.029: 99.1376% ( 3)
00:09:47.317 30504.029 - 30742.342: 99.1803% ( 5)
00:09:47.317 30742.342 - 30980.655: 99.2401% ( 7)
00:09:47.317 30980.655 - 31218.967: 99.2913% ( 6)
00:09:47.317 31218.967 - 31457.280: 99.3511% ( 7)
00:09:47.317 31457.280 - 31695.593: 99.4023% ( 6)
00:09:47.317 31695.593 - 31933.905: 99.4450% ( 5)
00:09:47.317 31933.905 - 32172.218: 99.4536% ( 1)
00:09:47.318 38606.662 - 38844.975: 99.4962% ( 5)
00:09:47.318 38844.975 - 39083.287: 99.5475% ( 6)
00:09:47.318 39083.287 - 39321.600: 99.6072% ( 7)
00:09:47.318 39321.600 - 39559.913: 99.6585% ( 6)
00:09:47.318 39559.913 - 39798.225: 99.7268% ( 8)
00:09:47.318 39798.225 - 40036.538: 99.7865% ( 7)
00:09:47.318 40036.538 - 40274.851: 99.8548% ( 8)
00:09:47.318 40274.851 - 40513.164: 99.9061% ( 6)
00:09:47.318 40513.164 - 40751.476: 99.9744% ( 8)
00:09:47.318 40751.476 - 40989.789: 100.0000% ( 3)
00:09:47.318
00:09:47.318 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:47.318 ==============================================================================
00:09:47.318 Range in us Cumulative IO count
00:09:47.318 8460.102 - 8519.680: 0.0085% ( 1)
00:09:47.318 8579.258 - 8638.836: 0.0683% ( 8)
00:09:47.318 8638.836 - 8698.415: 0.2818% ( 25)
00:09:47.318 8698.415 - 8757.993: 0.5977% ( 37)
00:09:47.318 8757.993 - 8817.571: 1.2807% ( 80)
00:09:47.318 8817.571 - 8877.149: 2.1175% ( 98)
00:09:47.318 8877.149 - 8936.727: 3.1079% ( 116)
00:09:47.318 8936.727 - 8996.305: 4.3033% ( 140)
00:09:47.318 8996.305 - 9055.884: 5.8145% ( 177)
00:09:47.318 9055.884 - 9115.462: 7.0099% ( 140)
00:09:47.318 9115.462 - 9175.040: 8.2650% ( 147)
00:09:47.318 9175.040 - 9234.618: 9.5202% ( 147)
00:09:47.318 9234.618 - 9294.196: 10.7240% ( 141)
00:09:47.318 9294.196 - 9353.775: 11.9621% ( 145)
00:09:47.318 9353.775 - 9413.353: 13.2514% ( 151)
00:09:47.318 9413.353 - 9472.931: 14.4723% ( 143)
00:09:47.318 9472.931 - 9532.509: 15.7531% ( 150)
00:09:47.318 9532.509 - 9592.087: 17.1277% ( 161)
00:09:47.318 9592.087 - 9651.665: 18.3914% ( 148)
00:09:47.318 9651.665 - 9711.244: 19.7319% ( 157)
00:09:47.318 9711.244 - 9770.822: 21.2432% ( 177)
00:09:47.318 9770.822 - 9830.400: 23.1130% ( 219)
00:09:47.318 9830.400 - 9889.978: 25.2305% ( 248)
00:09:47.318 9889.978 - 9949.556: 27.2541% ( 237)
00:09:47.318 9949.556 - 10009.135: 29.3374% ( 244)
00:09:47.318 10009.135 - 10068.713: 31.5232% ( 256)
00:09:47.318 10068.713 - 10128.291: 33.8969% ( 278)
00:09:47.318 10128.291 - 10187.869: 36.5523% ( 311)
00:09:47.318 10187.869 - 10247.447: 39.0625% ( 294)
00:09:47.318 10247.447 - 10307.025: 41.7520% ( 315)
00:09:47.318 10307.025 - 10366.604: 44.2111% ( 288)
00:09:47.318 10366.604 - 10426.182: 46.7725% ( 300)
00:09:47.318 10426.182 - 10485.760: 49.2657% ( 292)
00:09:47.318 10485.760 - 10545.338: 51.8016% ( 297)
00:09:47.318 10545.338 - 10604.916: 54.4826% ( 314)
00:09:47.318 10604.916 - 10664.495: 57.0099% ( 296)
00:09:47.318 10664.495 - 10724.073: 59.4518% ( 286)
00:09:47.318 10724.073 - 10783.651: 61.6035% ( 252)
00:09:47.318 10783.651 - 10843.229: 63.5417% ( 227)
00:09:47.318 10843.229 - 10902.807: 65.2237% ( 197)
00:09:47.318 10902.807 - 10962.385: 66.8289% ( 188)
00:09:47.318 10962.385 - 11021.964: 68.3231% ( 175)
00:09:47.318 11021.964 - 11081.542: 69.6465% ( 155)
00:09:47.318 11081.542 - 11141.120: 70.8333% ( 139)
00:09:47.318 11141.120 - 11200.698: 71.7640% ( 109)
00:09:47.318 11200.698 - 11260.276: 72.6093% ( 99)
00:09:47.318 11260.276 - 11319.855: 73.4631% ( 100)
00:09:47.318 11319.855 - 11379.433: 74.2999% ( 98)
00:09:47.318 11379.433 - 11439.011: 75.1793% ( 103)
00:09:47.318 11439.011 - 11498.589: 76.0929% ( 107)
00:09:47.318 11498.589 - 11558.167: 77.0919% ( 117)
00:09:47.318 11558.167 - 11617.745: 78.0738% ( 115)
00:09:47.318 11617.745 - 11677.324: 79.0301% ( 112)
00:09:47.318 11677.324 - 11736.902: 80.0034% ( 114)
00:09:47.318 11736.902 - 11796.480: 81.0195% ( 119)
00:09:47.318 11796.480 - 11856.058: 81.8904% ( 102)
00:09:47.318 11856.058 - 11915.636: 82.9235% ( 121)
00:09:47.318 11915.636 - 11975.215: 83.8712% ( 111)
00:09:47.318 11975.215 - 12034.793: 84.6909% ( 96)
00:09:47.318 12034.793 - 12094.371: 85.6301% ( 110)
00:09:47.318 12094.371 - 12153.949: 86.6035% ( 114)
00:09:47.318 12153.949 - 12213.527: 87.5768% ( 114)
00:09:47.318 12213.527 - 12273.105: 88.5246% ( 111)
00:09:47.318 12273.105 - 12332.684: 89.4296% ( 106)
00:09:47.318 12332.684 - 12392.262: 90.3091% ( 103)
00:09:47.318 12392.262 - 12451.840: 91.0861% ( 91)
00:09:47.318 12451.840 - 12511.418: 91.7179% ( 74)
00:09:47.318 12511.418 - 12570.996: 92.3070% ( 69)
00:09:47.318 12570.996 - 12630.575: 92.9730% ( 78)
00:09:47.318 12630.575 - 12690.153: 93.5280% ( 65)
00:09:47.318 12690.153 - 12749.731: 93.9293% ( 47)
00:09:47.318 12749.731 - 12809.309: 94.3050% ( 44)
00:09:47.318 12809.309 - 12868.887: 94.6465% ( 40)
00:09:47.318 12868.887 - 12928.465: 94.9624% ( 37)
00:09:47.318 12928.465 - 12988.044: 95.3040% ( 40)
00:09:47.318 12988.044 - 13047.622: 95.6455% ( 40)
00:09:47.318 13047.622 - 13107.200: 95.9187% ( 32)
00:09:47.318 13107.200 - 13166.778: 96.1578% ( 28)
00:09:47.318 13166.778 - 13226.356: 96.3712% ( 25)
00:09:47.318 13226.356 - 13285.935: 96.5335% ( 19)
00:09:47.318 13285.935 - 13345.513: 96.6786% ( 17)
00:09:47.318 13345.513 - 13405.091: 96.8579% ( 21)
00:09:47.318 13405.091 - 13464.669: 96.9860% ( 15)
00:09:47.318 13464.669 - 13524.247: 97.1055% ( 14)
00:09:47.318 13524.247 - 13583.825: 97.1909% ( 10)
00:09:47.318 13583.825 - 13643.404: 97.2678% ( 9)
00:09:47.318 13643.404 - 13702.982: 97.3446% ( 9)
00:09:47.318 13702.982 - 13762.560: 97.4214% ( 9)
00:09:47.318 13762.560 - 13822.138: 97.4812% ( 7)
00:09:47.318 13822.138 - 13881.716: 97.5324% ( 6)
00:09:47.318 13881.716 - 13941.295: 97.5751% ( 5)
00:09:47.318 13941.295 - 14000.873: 97.6178% ( 5)
00:09:47.318 14000.873 - 14060.451: 97.6434% ( 3)
00:09:47.318 14060.451 - 14120.029: 97.6605% ( 2)
00:09:47.318 14120.029 - 14179.607: 97.6861% ( 3)
00:09:47.318 14179.607 - 14239.185: 97.7117% ( 3)
00:09:47.318 14239.185 - 14298.764: 97.7374% ( 3)
00:09:47.318 14298.764 - 14358.342: 97.7630% ( 3)
00:09:47.318 14358.342 - 14417.920: 97.7886% ( 3)
00:09:47.318 14417.920 - 14477.498: 97.8142% ( 3)
00:09:47.318 14656.233 - 14715.811: 97.8313% ( 2)
00:09:47.318 14715.811 - 14775.389: 97.8740% ( 5)
00:09:47.318 14775.389 - 14834.967: 97.9081% ( 4)
00:09:47.318 14834.967 - 14894.545: 97.9423% ( 4)
00:09:47.318 14894.545 - 14954.124: 97.9679% ( 3)
00:09:47.318 14954.124 - 15013.702: 97.9850% ( 2)
00:09:47.318 15013.702 - 15073.280: 98.0020% ( 2)
00:09:47.318 15073.280 - 15132.858: 98.0191% ( 2)
00:09:47.318 15132.858 - 15192.436: 98.0362% ( 2)
00:09:47.318 15192.436 - 15252.015: 98.0533% ( 2)
00:09:47.318 15252.015 - 15371.171: 98.1301% ( 9)
00:09:47.318 15371.171 - 15490.327: 98.2326% ( 12)
00:09:47.318 15490.327 - 15609.484: 98.3180% ( 10)
00:09:47.318 15609.484 - 15728.640: 98.4204% ( 12)
00:09:47.318 15728.640 - 15847.796: 98.4973% ( 9)
00:09:47.318 15847.796 - 15966.953: 98.5912% ( 11)
00:09:47.318 15966.953 - 16086.109: 98.7022% ( 13)
00:09:47.318 16086.109 - 16205.265: 98.7620% ( 7)
00:09:47.318 16205.265 - 16324.422: 98.8132% ( 6)
00:09:47.318 16324.422 - 16443.578: 98.8730% ( 7)
00:09:47.318 16443.578 - 16562.735: 98.9071% ( 4)
00:09:47.318 28954.996 - 29074.153: 98.9242% ( 2)
00:09:47.318 29074.153 - 29193.309: 98.9498% ( 3)
00:09:47.318 29193.309 - 29312.465: 98.9754% ( 3)
00:09:47.318 29312.465 - 29431.622: 99.0010% ( 3)
00:09:47.318 29431.622 - 29550.778: 99.0266% ( 3)
00:09:47.318 29550.778 - 29669.935: 99.0608% ( 4)
00:09:47.318 29669.935 - 29789.091: 99.1035% ( 5)
00:09:47.318 29789.091 - 29908.247: 99.1206% ( 2)
00:09:47.318 29908.247 - 30027.404: 99.1547% ( 4)
00:09:47.318 30027.404 - 30146.560: 99.1803% ( 3)
00:09:47.318 30146.560 - 30265.716: 99.2145% ( 4)
00:09:47.318 30265.716 - 30384.873: 99.2486% ( 4)
00:09:47.318 30384.873 - 30504.029: 99.2742% ( 3)
00:09:47.318 30504.029 - 30742.342: 99.3340% ( 7)
00:09:47.318 30742.342 - 30980.655: 99.4023% ( 8)
00:09:47.318 30980.655 - 31218.967: 99.4536% ( 6)
00:09:47.318 36461.847 - 36700.160: 99.4621% ( 1)
00:09:47.318 36700.160 - 36938.473: 99.5389% ( 9)
00:09:47.318 36938.473 - 37176.785: 99.5902% ( 6)
00:09:47.318 37176.785 - 37415.098: 99.6585% ( 8)
00:09:47.318 37415.098 - 37653.411: 99.7268% ( 8)
00:09:47.318 37653.411 - 37891.724: 99.7865% ( 7)
00:09:47.318 37891.724 - 38130.036: 99.8463% ( 7)
00:09:47.318 38130.036 - 38368.349: 99.9061% ( 7)
00:09:47.318 38368.349 - 38606.662: 99.9744% ( 8)
00:09:47.318 38606.662 - 38844.975: 100.0000% ( 3)
00:09:47.318
00:09:47.318 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:47.318 ==============================================================================
00:09:47.318 Range in us Cumulative IO count
00:09:47.318 8460.102 - 8519.680: 0.0085% ( 1)
00:09:47.318 8519.680 - 8579.258: 0.0256% ( 2)
00:09:47.318 8579.258 - 8638.836: 0.1195% ( 11)
00:09:47.318 8638.836 - 8698.415: 0.3586% ( 28)
00:09:47.318 8698.415 - 8757.993: 0.7258% ( 43)
00:09:47.318 8757.993 - 8817.571: 1.2807% ( 65)
00:09:47.318 8817.571 - 8877.149: 2.0748% ( 93)
00:09:47.318 8877.149 - 8936.727: 3.0311% ( 112)
00:09:47.318 8936.727 - 8996.305: 3.8678% ( 98)
00:09:47.318 8996.305 - 9055.884: 4.9010% ( 121)
00:09:47.318 9055.884 - 9115.462: 6.0024% ( 129)
00:09:47.318 9115.462 - 9175.040: 7.2831% ( 150)
00:09:47.318 9175.040 - 9234.618: 8.6578% ( 161)
00:09:47.318 9234.618 - 9294.196: 10.1093% ( 170)
00:09:47.318 9294.196 - 9353.775: 11.6291% ( 178)
00:09:47.318 9353.775 - 9413.353: 13.3026% ( 196)
00:09:47.318 9413.353 - 9472.931: 14.9590% ( 194)
00:09:47.318 9472.931 - 9532.509: 16.2483% ( 151)
00:09:47.318 9532.509 - 9592.087: 17.5290% ( 150)
00:09:47.318 9592.087 - 9651.665: 18.8098% ( 150)
00:09:47.318 9651.665 - 9711.244: 20.5003% ( 198)
00:09:47.318 9711.244 - 9770.822: 22.2336% ( 203)
00:09:47.318 9770.822 - 9830.400: 24.0693% ( 215)
00:09:47.318 9830.400 - 9889.978: 26.0587% ( 233)
00:09:47.318 9889.978 - 9949.556: 28.0396% ( 232)
00:09:47.318 9949.556 - 10009.135: 30.2681% ( 261)
00:09:47.318 10009.135 - 10068.713: 32.3600% ( 245)
00:09:47.318 10068.713 - 10128.291: 34.6653% ( 270)
00:09:47.318 10128.291 - 10187.869: 36.9450% ( 267)
00:09:47.318 10187.869 - 10247.447: 39.0967% ( 252)
00:09:47.318 10247.447 - 10307.025: 41.4191% ( 272)
00:09:47.318 10307.025 - 10366.604: 43.8183% ( 281)
00:09:47.318 10366.604 - 10426.182: 46.2176% ( 281)
00:09:47.318 10426.182 - 10485.760: 49.1376% ( 342)
00:09:47.318 10485.760 - 10545.338: 52.0150% ( 337)
00:09:47.318 10545.338 - 10604.916: 54.8412% ( 331)
00:09:47.318 10604.916 - 10664.495: 57.5478% ( 317)
00:09:47.318 10664.495 - 10724.073: 60.0666% ( 295)
00:09:47.318 10724.073 - 10783.651: 62.4573% ( 280)
00:09:47.318 10783.651 - 10843.229: 64.5577% ( 246)
00:09:47.318 10843.229 - 10902.807: 66.4276% ( 219)
00:09:47.318 10902.807 - 10962.385: 68.1182% ( 198)
00:09:47.318 10962.385 - 11021.964: 69.4928% ( 161)
00:09:47.318 11021.964 - 11081.542: 70.6370% ( 134)
00:09:47.318 11081.542 - 11141.120: 71.6530% ( 119)
00:09:47.318 11141.120 - 11200.698: 72.4812% ( 97)
00:09:47.318 11200.698 - 11260.276: 73.3948% ( 107)
00:09:47.318 11260.276 - 11319.855: 74.2657% ( 102)
00:09:47.318 11319.855 - 11379.433: 75.0256% ( 89)
00:09:47.318 11379.433 - 11439.011: 76.0246% ( 117)
00:09:47.318 11439.011 - 11498.589: 77.0663% ( 122)
00:09:47.318 11498.589 - 11558.167: 78.0311% ( 113)
00:09:47.318 11558.167 - 11617.745: 78.9447% ( 107)
00:09:47.318 11617.745 - 11677.324: 79.8070% ( 101)
00:09:47.318 11677.324 - 11736.902: 80.6182% ( 95)
00:09:47.318 11736.902 - 11796.480: 81.5147% ( 105)
00:09:47.318 11796.480 - 11856.058: 82.2917% ( 91)
00:09:47.318 11856.058 - 11915.636: 83.0686% ( 91)
00:09:47.318 11915.636 - 11975.215: 83.9652% ( 105)
00:09:47.318 11975.215 - 12034.793: 84.9556% ( 116)
00:09:47.318 12034.793 - 12094.371: 85.7070% ( 88)
00:09:47.318 12094.371 - 12153.949: 86.4925% ( 92)
00:09:47.318 12153.949 - 12213.527: 87.2353% ( 87)
00:09:47.318 12213.527 - 12273.105: 87.9867% ( 88)
00:09:47.318 12273.105 - 12332.684: 88.7722% ( 92)
00:09:47.318 12332.684 - 12392.262: 89.5321% ( 89)
00:09:47.318 12392.262 - 12451.840: 90.2322% ( 82)
00:09:47.319 12451.840 - 12511.418: 90.8470% ( 72)
00:09:47.319 12511.418 - 12570.996: 91.4788% ( 74)
00:09:47.319 12570.996 - 12630.575: 92.1533% ( 79)
00:09:47.319 12630.575 - 12690.153: 92.7169% ( 66)
00:09:47.319 12690.153 - 12749.731: 93.2804% ( 66)
00:09:47.319 12749.731 - 12809.309: 93.7244% ( 52)
00:09:47.319 12809.309 - 12868.887: 94.1855% ( 54)
00:09:47.319 12868.887 - 12928.465: 94.5867% ( 47)
00:09:47.319 12928.465 - 12988.044: 94.9197% ( 39)
00:09:47.319 12988.044 - 13047.622: 95.2613% ( 40)
00:09:47.319 13047.622 - 13107.200: 95.5943% ( 39)
00:09:47.319 13107.200 - 13166.778: 95.8931% ( 35)
00:09:47.319 13166.778 - 13226.356: 96.1663% ( 32)
00:09:47.319 13226.356 - 13285.935: 96.3883% ( 26)
00:09:47.319 13285.935 - 13345.513: 96.5847% ( 23)
00:09:47.319 13345.513 - 13405.091: 96.7384% ( 18)
00:09:47.319 13405.091 - 13464.669: 96.8494% ( 13)
00:09:47.319 13464.669 - 13524.247: 96.9604% ( 13)
00:09:47.319 13524.247 - 13583.825: 97.0628% ( 12)
00:09:47.319 13583.825 - 13643.404: 97.1482% ( 10)
00:09:47.319 13643.404 - 13702.982: 97.2251% ( 9)
00:09:47.319 13702.982 - 13762.560: 97.2934% ( 8)
00:09:47.319 13762.560 - 13822.138: 97.3617% ( 8)
00:09:47.319 13822.138 - 13881.716: 97.4214% ( 7)
00:09:47.319 13881.716 - 13941.295: 97.4898% ( 8)
00:09:47.319 13941.295 - 14000.873: 97.5666% ( 9)
00:09:47.319 14000.873 - 14060.451: 97.6178% ( 6)
00:09:47.319 14060.451 - 14120.029: 97.6434% ( 3)
00:09:47.319 14120.029 - 14179.607: 97.6605% ( 2)
00:09:47.319 14179.607 - 14239.185: 97.6861% ( 3)
00:09:47.319 14239.185 - 14298.764: 97.7117% ( 3)
00:09:47.319 14298.764 - 14358.342: 97.7374% ( 3)
00:09:47.319 14358.342 - 14417.920: 97.7630% ( 3)
00:09:47.319 14417.920 - 14477.498: 97.7886% ( 3)
00:09:47.319 14477.498 - 14537.076: 97.8142% ( 3)
00:09:47.319 14775.389 - 14834.967: 97.8227% ( 1)
00:09:47.319 15013.702 - 15073.280: 97.8569% ( 4)
00:09:47.319 15073.280 - 15132.858: 97.8825% ( 3)
00:09:47.319 15132.858 - 15192.436: 97.9167% ( 4)
00:09:47.319 15192.436 - 15252.015: 97.9764% ( 7)
00:09:47.319 15252.015 - 15371.171: 98.1557% ( 21)
00:09:47.319 15371.171 - 15490.327: 98.2753% ( 14)
00:09:47.319 15490.327 - 15609.484: 98.3607% ( 10)
00:09:47.319 15609.484 - 15728.640: 98.4290% ( 8)
00:09:47.319 15728.640 - 15847.796: 98.4973% ( 8)
00:09:47.319 15847.796 - 15966.953: 98.5827% ( 10)
00:09:47.319 15966.953 - 16086.109: 98.6595% ( 9)
00:09:47.319 16086.109 - 16205.265: 98.7534% ( 11)
00:09:47.319 16205.265 - 16324.422: 98.8473% ( 11)
00:09:47.319 16324.422 - 16443.578: 98.9071% ( 7)
00:09:47.319 28001.745 - 28120.902: 98.9242% ( 2)
00:09:47.319 28120.902 - 28240.058: 98.9498% ( 3)
00:09:47.319 28240.058 - 28359.215: 98.9839% ( 4)
00:09:47.319 28359.215 - 28478.371: 99.0181% ( 4)
00:09:47.319 28478.371 - 28597.527: 99.0608% ( 5)
00:09:47.319 28597.527 - 28716.684: 99.0949% ( 4)
00:09:47.319 28716.684 - 28835.840: 99.1291% ( 4)
00:09:47.319 28835.840 - 28954.996: 99.1633% ( 4)
00:09:47.319 28954.996 - 29074.153: 99.1974% ( 4)
00:09:47.319 29074.153 - 29193.309: 99.2316% ( 4)
00:09:47.319 29193.309 - 29312.465: 99.2572% ( 3)
00:09:47.319 29312.465 - 29431.622: 99.2913% ( 4)
00:09:47.319 29431.622 - 29550.778: 99.3255% ( 4)
00:09:47.319 29550.778 - 29669.935: 99.3596% ( 4)
00:09:47.319 29669.935 - 29789.091: 99.3938% ( 4)
00:09:47.319 29789.091 - 29908.247: 99.4279% ( 4)
00:09:47.319 29908.247 - 30027.404: 99.4536% ( 3)
00:09:47.319 35270.284 - 35508.596: 99.5133% ( 7)
00:09:47.319 35508.596 - 35746.909: 99.5816% ( 8)
00:09:47.319 35746.909 - 35985.222: 99.6499% ( 8)
00:09:47.319 35985.222 - 36223.535: 99.7097% ( 7)
00:09:47.319 36223.535 - 36461.847: 99.7865% ( 9)
00:09:47.319 36461.847 - 36700.160: 99.8548% ( 8)
00:09:47.319 36700.160 - 36938.473: 99.9232% ( 8)
00:09:47.319 36938.473 - 37176.785: 99.9829% ( 7)
00:09:47.319 37176.785 - 37415.098: 100.0000% ( 2)
00:09:47.319
00:09:47.319 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:47.319 ==============================================================================
00:09:47.319 Range in us Cumulative IO count
00:09:47.319 [histogram data condensed: buckets from 8579.258 us to 35985.222 us, cumulative to 100.0000%]
00:09:47.320
00:09:47.320 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:47.320 ==============================================================================
00:09:47.320 Range in us Cumulative IO count
00:09:47.320 [histogram data condensed: buckets from 8400.524 us to 34317.033 us, cumulative to 100.0000%]
00:09:47.320
00:09:47.320 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:47.320 ==============================================================================
00:09:47.320 Range in us Cumulative IO count
00:09:47.321 [histogram data condensed: buckets from 8460.102 us to 32410.531 us, cumulative to 100.0000%]
00:09:47.321
00:09:47.321 04:59:01 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:47.321
00:09:47.321 real 0m2.671s
00:09:47.321 user 0m2.286s
00:09:47.321 sys 0m0.273s
00:09:47.321 04:59:01 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:47.321 04:59:01 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:09:47.321 ************************************
00:09:47.321 END TEST nvme_perf
00:09:47.321 ************************************
00:09:47.321 04:59:01 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:47.321 04:59:01 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:09:47.321 04:59:01 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:47.321 04:59:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:47.321 ************************************
00:09:47.321 START TEST nvme_hello_world
00:09:47.321 ************************************
00:09:47.321 04:59:01 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:47.580 Initializing NVMe Controllers
00:09:47.580 Attached to 0000:00:10.0
00:09:47.580 Namespace ID: 1 size: 6GB
00:09:47.580 Attached to 0000:00:11.0
00:09:47.580 Namespace ID: 1 size: 5GB
00:09:47.580 Attached to 0000:00:13.0
00:09:47.580 Namespace ID: 1 size: 1GB
00:09:47.580 Attached to 0000:00:12.0
00:09:47.580 Namespace ID: 1 size: 4GB
00:09:47.580 Namespace ID: 2 size: 4GB
00:09:47.580 Namespace ID: 3 size: 4GB
00:09:47.580 Initialization complete.
00:09:47.580 INFO: using host memory buffer for IO
00:09:47.580 Hello world!
00:09:47.580 INFO: using host memory buffer for IO
00:09:47.580 Hello world!
00:09:47.580 INFO: using host memory buffer for IO
00:09:47.580 Hello world!
00:09:47.580 INFO: using host memory buffer for IO
00:09:47.580 Hello world!
00:09:47.580 INFO: using host memory buffer for IO
00:09:47.580 Hello world!
00:09:47.580 INFO: using host memory buffer for IO
00:09:47.580 Hello world!
00:09:47.580
00:09:47.580 real 0m0.307s
00:09:47.580 user 0m0.122s
00:09:47.580 sys 0m0.140s
00:09:47.580 04:59:02 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:47.580 04:59:02 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:09:47.580 ************************************
00:09:47.580 END TEST nvme_hello_world
00:09:47.580 ************************************
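hello_world is one of the stock SPDK example binaries, and run_test (from autotest_common.sh) is what times it and prints the START/END banners above. To reproduce this step by hand on a machine with the same repo layout as this job, something like the following should work (a sketch; setup.sh rebinds the NVMe controllers to a userspace driver, so do not run it on a box whose NVMe disks are in use):

cd /home/vagrant/spdk_repo/spdk
sudo ./scripts/setup.sh                  # bind NVMe devices for SPDK
sudo ./build/examples/hello_world -i 0   # same invocation the harness used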
00:09:47.580 04:59:02 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:47.580 04:59:02 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:47.580 04:59:02 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:47.580 04:59:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:47.580 ************************************
00:09:47.580 START TEST nvme_sgl
00:09:47.580 ************************************
00:09:47.580 04:59:02 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:47.839 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:09:47.839 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:09:47.839 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:09:47.839 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:09:47.839 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:09:47.839 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:09:47.839 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:09:47.839 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:09:47.839 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:09:48.097 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:09:48.097 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:09:48.097 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:09:48.097 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:09:48.097 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:09:48.097 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:09:48.097 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:09:48.097 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:09:48.097 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:09:48.097 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:09:48.097 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:09:48.097 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:09:48.097 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:09:48.097 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:09:48.097 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:09:48.097 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:09:48.097 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:09:48.097 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:09:48.097 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:09:48.097 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:09:48.097 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:09:48.097 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:09:48.097 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:09:48.097 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:09:48.097 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:09:48.097 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:09:48.097 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:09:48.097 NVMe Readv/Writev Request test
00:09:48.097 Attached to 0000:00:10.0
00:09:48.097 Attached to 0000:00:11.0
00:09:48.097 Attached to 0000:00:13.0
00:09:48.097 Attached to 0000:00:12.0
00:09:48.097 0000:00:10.0: build_io_request_2 test passed
00:09:48.097 0000:00:10.0: build_io_request_4 test passed
00:09:48.097 0000:00:10.0: build_io_request_5 test passed
00:09:48.097 0000:00:10.0: build_io_request_6 test passed
00:09:48.097 0000:00:10.0: build_io_request_7 test passed
00:09:48.097 0000:00:10.0: build_io_request_10 test passed
00:09:48.097 0000:00:11.0: build_io_request_2 test passed
00:09:48.097 0000:00:11.0: build_io_request_4 test passed
00:09:48.097 0000:00:11.0: build_io_request_5 test passed
00:09:48.097 0000:00:11.0: build_io_request_6 test passed
00:09:48.097 0000:00:11.0: build_io_request_7 test passed
00:09:48.097 0000:00:11.0: build_io_request_10 test passed
00:09:48.097 Cleaning up...
00:09:48.097
00:09:48.097 real 0m0.388s
00:09:48.097 user 0m0.202s
00:09:48.097 sys 0m0.140s
00:09:48.097 04:59:02 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:48.097 ************************************
00:09:48.097 END TEST nvme_sgl
00:09:48.097 ************************************
00:09:48.097 04:59:02 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
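The sgl test deliberately submits build_io_request variants whose lengths violate the controller's SGL constraints; "Invalid IO length parameter" is the expected rejection, and only the well-formed variants report "test passed". A hypothetical post-processing one-liner (not part of the harness; sgl.log is a made-up file name for a saved copy of this output) to tally the two outcomes per controller:

awk '/build_io_request_/ {
    ctrl = $2; sub(/:$/, "", ctrl)              # e.g. 0000:00:10.0
    if ($0 ~ /test passed/) pass[ctrl]++
    else if ($0 ~ /Invalid IO length/) inval[ctrl]++
    seen[ctrl] = 1
} END {
    for (c in seen) printf "%s: passed=%d rejected=%d\n", c, pass[c], inval[c]
}' sgl.log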
00:09:48.097 04:59:02 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:48.097 04:59:02 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:48.097 04:59:02 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:48.097 04:59:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:48.097 ************************************
00:09:48.097 START TEST nvme_e2edp
00:09:48.097 ************************************
00:09:48.097 04:59:02 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:48.356 NVMe Write/Read with End-to-End data protection test
00:09:48.356 Attached to 0000:00:10.0
00:09:48.356 Attached to 0000:00:11.0
00:09:48.356 Attached to 0000:00:13.0
00:09:48.356 Attached to 0000:00:12.0
00:09:48.356 Cleaning up...
00:09:48.356
00:09:48.356 real 0m0.288s
00:09:48.356 user 0m0.111s
00:09:48.356 sys 0m0.134s
00:09:48.356 04:59:02 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:48.356 04:59:02 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:09:48.356 ************************************
00:09:48.356 END TEST nvme_e2edp
00:09:48.356 ************************************
00:09:48.356 04:59:02 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:48.356 04:59:02 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:48.356 04:59:02 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:48.356 04:59:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:48.356 ************************************
00:09:48.356 START TEST nvme_reserve
00:09:48.356 ************************************
00:09:48.356 04:59:02 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:48.614 =====================================================
00:09:48.614 NVMe Controller at PCI bus 0, device 16, function 0
00:09:48.614 =====================================================
00:09:48.614 Reservations: Not Supported
00:09:48.614 =====================================================
00:09:48.614 NVMe Controller at PCI bus 0, device 17, function 0
00:09:48.614 =====================================================
00:09:48.614 Reservations: Not Supported
00:09:48.614 =====================================================
00:09:48.614 NVMe Controller at PCI bus 0, device 19, function 0
00:09:48.614 =====================================================
00:09:48.614 Reservations: Not Supported
00:09:48.614 =====================================================
00:09:48.614 NVMe Controller at PCI bus 0, device 18, function 0
00:09:48.614 =====================================================
00:09:48.614 Reservations: Not Supported
00:09:48.614 Reservation test passed
00:09:48.614
00:09:48.614 real 0m0.304s
00:09:48.614 user 0m0.112s
00:09:48.614 sys 0m0.145s
00:09:48.614 04:59:03 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:48.614 04:59:03 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:09:48.614 ************************************
00:09:48.614 END TEST nvme_reserve
00:09:48.614 ************************************
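"Reservations: Not Supported" is the expected result on this rig: the QEMU-emulated controllers do not advertise the optional reservations feature, so the test only verifies that the capability is reported and handled correctly. On a device driven by the kernel, the same capability (the ONCS reservations bit from Identify Controller) can be inspected with nvme-cli; a sketch, with /dev/nvme0 as an example device (this will not work while SPDK owns the controller):

sudo nvme id-ctrl /dev/nvme0 -H | grep -i reservation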
00:09:48.876 04:59:03 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:48.876 04:59:03 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:48.876 04:59:03 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:48.876 04:59:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:48.876 ************************************
00:09:48.876 START TEST nvme_err_injection
00:09:48.876 ************************************
00:09:48.876 04:59:03 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:49.135 NVMe Error Injection test
00:09:49.135 Attached to 0000:00:10.0
00:09:49.135 Attached to 0000:00:11.0
00:09:49.135 Attached to 0000:00:13.0
00:09:49.135 Attached to 0000:00:12.0
00:09:49.135 0000:00:10.0: get features failed as expected
00:09:49.135 0000:00:11.0: get features failed as expected
00:09:49.135 0000:00:13.0: get features failed as expected
00:09:49.135 0000:00:12.0: get features failed as expected
00:09:49.135 0000:00:13.0: get features successfully as expected
00:09:49.135 0000:00:12.0: get features successfully as expected
00:09:49.135 0000:00:10.0: get features successfully as expected
00:09:49.135 0000:00:11.0: get features successfully as expected
00:09:49.135 0000:00:11.0: read failed as expected
00:09:49.135 0000:00:10.0: read failed as expected
00:09:49.135 0000:00:13.0: read failed as expected
00:09:49.135 0000:00:12.0: read failed as expected
00:09:49.135 0000:00:10.0: read successfully as expected
00:09:49.135 0000:00:11.0: read successfully as expected
00:09:49.135 0000:00:13.0: read successfully as expected
00:09:49.135 0000:00:12.0: read successfully as expected
00:09:49.135 Cleaning up...
00:09:49.135 ************************************
00:09:49.135 END TEST nvme_err_injection
00:09:49.135 ************************************
00:09:49.135
00:09:49.135 real 0m0.312s
00:09:49.135 user 0m0.125s
00:09:49.135 sys 0m0.138s
00:09:49.135 04:59:03 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:49.135 04:59:03 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
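The error-injection test follows a force-failure / expect-failure / recover pattern: each "get features" admin command and each read is first made to fail (and must fail), then the injection is cleared and the same operation must succeed. The actual test drives this through the SPDK NVMe API; purely as a generic bash sketch, the "expected failure" idiom it relies on looks like:

expect_failure() {
    # Run a command that is supposed to fail; succeed only if it does.
    if "$@"; then
        echo "ERROR: '$*' succeeded but was expected to fail" >&2
        return 1
    fi
    echo "'$*' failed as expected"
}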
00:09:49.135 04:59:03 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:49.135 04:59:03 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:09:49.135 04:59:03 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:49.135 04:59:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:49.135 ************************************
00:09:49.135 START TEST nvme_overhead
00:09:49.135 ************************************
00:09:49.135 04:59:03 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:50.512 Initializing NVMe Controllers
00:09:50.512 Attached to 0000:00:10.0
00:09:50.512 Attached to 0000:00:11.0
00:09:50.512 Attached to 0000:00:13.0
00:09:50.512 Attached to 0000:00:12.0
00:09:50.512 Initialization complete. Launching workers.
00:09:50.512 submit (in ns) avg, min, max = 16644.2, 13120.0, 67233.2
00:09:50.512 complete (in ns) avg, min, max = 11668.8, 8804.5, 104191.4
00:09:50.512
00:09:50.512 Submit histogram
00:09:50.512 ================
00:09:50.512 Range in us Cumulative Count
00:09:50.512 [histogram data condensed: buckets from 13.091 us to 67.491 us, cumulative to 100.0000%]
00:09:50.513
00:09:50.513 Complete histogram
00:09:50.513 ==================
00:09:50.513 Range in us Cumulative Count
00:09:50.514 [histogram data condensed: buckets from 8.785 us to 104.262 us, cumulative to 100.0000%]
00:09:50.514
00:09:50.514 ************************************
00:09:50.514 END TEST nvme_overhead
00:09:50.514 ************************************
00:09:50.514
00:09:50.514 real 0m1.289s
00:09:50.514 user 0m1.114s
00:09:50.514 sys 0m0.129s
00:09:50.514 04:59:04 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:50.514 04:59:04 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
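Unlike the nvme_perf latency histograms earlier, this test measures software overhead on the submit and completion paths: the summary lines are in nanoseconds while the histograms bin in microseconds, so the 16644.2 ns submit average sits in the 16-17 us region of the submit histogram. Taken together, the two averages give the per-IO processing cost measured here; a quick check with bc:

echo "scale=1; (16644.2 + 11668.8) / 1000" | bc   # ~28.3 us combined submit+complete average per IO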
00:09:50.514 04:59:04 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:50.514 04:59:04 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:09:50.514 04:59:04 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:50.514 04:59:04 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:50.514 ************************************
00:09:50.514 START TEST nvme_arbitration
00:09:50.514 ************************************
00:09:50.514 04:59:04 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:53.801 Initializing NVMe Controllers
00:09:53.801 Attached to 0000:00:10.0
00:09:53.801 Attached to 0000:00:11.0
00:09:53.802 Attached to 0000:00:13.0
00:09:53.802 Attached to 0000:00:12.0
00:09:53.802 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:53.802 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:53.802 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:53.802 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:53.802 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:53.802 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:53.802 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:53.802 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:53.802 Initialization complete. Launching workers. 00:09:53.802 Starting thread on core 1 with urgent priority queue 00:09:53.802 Starting thread on core 2 with urgent priority queue 00:09:53.802 Starting thread on core 3 with urgent priority queue 00:09:53.802 Starting thread on core 0 with urgent priority queue 00:09:53.802 QEMU NVMe Ctrl (12340 ) core 0: 597.33 IO/s 167.41 secs/100000 ios 00:09:53.802 QEMU NVMe Ctrl (12342 ) core 0: 597.33 IO/s 167.41 secs/100000 ios 00:09:53.802 QEMU NVMe Ctrl (12341 ) core 1: 725.33 IO/s 137.87 secs/100000 ios 00:09:53.802 QEMU NVMe Ctrl (12342 ) core 1: 725.33 IO/s 137.87 secs/100000 ios 00:09:53.802 QEMU NVMe Ctrl (12343 ) core 2: 597.33 IO/s 167.41 secs/100000 ios 00:09:53.802 QEMU NVMe Ctrl (12342 ) core 3: 682.67 IO/s 146.48 secs/100000 ios 00:09:53.802 ======================================================== 00:09:53.802 00:09:53.802 00:09:53.802 real 0m3.428s 00:09:53.802 user 0m9.453s 00:09:53.802 sys 0m0.148s 00:09:53.802 04:59:08 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:53.802 ************************************ 00:09:53.802 END TEST nvme_arbitration 00:09:53.802 ************************************ 00:09:53.802 04:59:08 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:54.060 04:59:08 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:54.060 04:59:08 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:54.060 04:59:08 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.060 04:59:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:54.060 ************************************ 00:09:54.060 START TEST nvme_single_aen 00:09:54.060 ************************************ 00:09:54.060 04:59:08 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:54.320 Asynchronous Event Request test 00:09:54.320 Attached to 0000:00:10.0 00:09:54.320 Attached to 0000:00:11.0 00:09:54.320 Attached to 0000:00:13.0 00:09:54.320 Attached to 0000:00:12.0 00:09:54.320 Reset controller to setup AER completions for this process 00:09:54.320 Registering asynchronous event callbacks... 
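A quick sanity note on the arbitration table above: the secs/100000 ios column is just 100000 divided by the IO/s figure (for example 100000 / 597.33 ≈ 167.41), so the two columns are consistent. Below is a minimal sketch of reproducing this run by hand, assuming the same repo layout as this job; the wrapper name rerun_arbitration is ours, while the path and flags are copied from the invocation logged above (run_test in the harness does the equivalent).

    #!/usr/bin/env bash
    # Re-run the arbitration example the way the harness invokes it above:
    # -t 3 runs the workload for three seconds, -i 0 selects DPDK shared-memory
    # ID 0 so the example can coexist with the other SPDK processes in this job.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumption: layout from this log

    rerun_arbitration() {
      "$SPDK_DIR/build/examples/arbitration" -t 3 -i 0
    }
    rerun_arbitration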
00:09:54.320 Getting orig temperature thresholds of all controllers 00:09:54.320 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:54.320 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:54.320 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:54.320 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:54.320 Setting all controllers temperature threshold low to trigger AER 00:09:54.320 Waiting for all controllers temperature threshold to be set lower 00:09:54.320 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:54.320 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:54.320 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:54.320 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:54.320 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:54.320 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:54.320 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:54.320 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:54.320 Waiting for all controllers to trigger AER and reset threshold 00:09:54.320 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:54.320 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:54.320 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:54.320 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:54.320 Cleaning up... 00:09:54.320 ************************************ 00:09:54.320 END TEST nvme_single_aen 00:09:54.320 ************************************ 00:09:54.320 00:09:54.320 real 0m0.290s 00:09:54.320 user 0m0.099s 00:09:54.320 sys 0m0.142s 00:09:54.320 04:59:08 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:54.320 04:59:08 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:54.320 04:59:08 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:54.320 04:59:08 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:54.320 04:59:08 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.320 04:59:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:54.320 ************************************ 00:09:54.320 START TEST nvme_doorbell_aers 00:09:54.320 ************************************ 00:09:54.320 04:59:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:09:54.320 04:59:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:54.320 04:59:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:54.320 04:59:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:54.320 04:59:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:54.320 04:59:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # bdfs=() 00:09:54.320 04:59:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # local bdfs 00:09:54.320 04:59:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:54.320 04:59:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:54.320 04:59:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 
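The xtrace lines above show how nvme_doorbell_aers discovers which controllers to exercise: scripts/gen_nvme.sh emits a JSON bdev config and jq pulls each PCI address out of .config[].params.traddr. A self-contained sketch of that discovery step follows, assuming gen_nvme.sh produces its usual attach_controller entries; the empty-array guard mirrors the (( 4 == 0 )) check echoed just below.

    #!/usr/bin/env bash
    # Sketch of the bdf discovery traced above.
    rootdir=/home/vagrant/spdk_repo/spdk    # assumption: as in this job

    get_nvme_bdfs() {
      local bdfs
      bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
      (( ${#bdfs[@]} == 0 )) && return 1    # bail out if no controllers found
      printf '%s\n' "${bdfs[@]}"            # here: 0000:00:10.0 .. 0000:00:13.0
    }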
00:09:54.320 04:59:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # (( 4 == 0 )) 00:09:54.320 04:59:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:54.320 04:59:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:54.320 04:59:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:54.579 [2024-07-24 04:59:09.091319] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:04.562 Executing: test_write_invalid_db 00:10:04.562 Waiting for AER completion... 00:10:04.562 Failure: test_write_invalid_db 00:10:04.562 00:10:04.562 Executing: test_invalid_db_write_overflow_sq 00:10:04.562 Waiting for AER completion... 00:10:04.562 Failure: test_invalid_db_write_overflow_sq 00:10:04.562 00:10:04.562 Executing: test_invalid_db_write_overflow_cq 00:10:04.562 Waiting for AER completion... 00:10:04.562 Failure: test_invalid_db_write_overflow_cq 00:10:04.562 00:10:04.562 04:59:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:04.562 04:59:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:04.562 [2024-07-24 04:59:19.142968] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:14.623 Executing: test_write_invalid_db 00:10:14.623 Waiting for AER completion... 00:10:14.623 Failure: test_write_invalid_db 00:10:14.623 00:10:14.623 Executing: test_invalid_db_write_overflow_sq 00:10:14.623 Waiting for AER completion... 00:10:14.623 Failure: test_invalid_db_write_overflow_sq 00:10:14.623 00:10:14.623 Executing: test_invalid_db_write_overflow_cq 00:10:14.623 Waiting for AER completion... 00:10:14.623 Failure: test_invalid_db_write_overflow_cq 00:10:14.623 00:10:14.623 04:59:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:14.623 04:59:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:14.623 [2024-07-24 04:59:29.236431] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:24.636 Executing: test_write_invalid_db 00:10:24.636 Waiting for AER completion... 00:10:24.636 Failure: test_write_invalid_db 00:10:24.636 00:10:24.636 Executing: test_invalid_db_write_overflow_sq 00:10:24.636 Waiting for AER completion... 00:10:24.636 Failure: test_invalid_db_write_overflow_sq 00:10:24.636 00:10:24.636 Executing: test_invalid_db_write_overflow_cq 00:10:24.636 Waiting for AER completion... 
00:10:24.636 Failure: test_invalid_db_write_overflow_cq 00:10:24.636 00:10:24.636 04:59:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:24.636 04:59:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:24.895 [2024-07-24 04:59:39.276128] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:34.873 Executing: test_write_invalid_db 00:10:34.873 Waiting for AER completion... 00:10:34.873 Failure: test_write_invalid_db 00:10:34.873 00:10:34.873 Executing: test_invalid_db_write_overflow_sq 00:10:34.873 Waiting for AER completion... 00:10:34.873 Failure: test_invalid_db_write_overflow_sq 00:10:34.873 00:10:34.873 Executing: test_invalid_db_write_overflow_cq 00:10:34.873 Waiting for AER completion... 00:10:34.873 Failure: test_invalid_db_write_overflow_cq 00:10:34.873 00:10:34.873 ************************************ 00:10:34.873 END TEST nvme_doorbell_aers 00:10:34.873 ************************************ 00:10:34.873 00:10:34.873 real 0m40.252s 00:10:34.873 user 0m34.082s 00:10:34.873 sys 0m5.822s 00:10:34.873 04:59:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.873 04:59:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:34.873 04:59:49 nvme -- nvme/nvme.sh@97 -- # uname 00:10:34.873 04:59:49 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:34.873 04:59:49 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:34.873 04:59:49 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:34.873 04:59:49 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.873 04:59:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:34.873 ************************************ 00:10:34.873 START TEST nvme_multi_aen 00:10:34.873 ************************************ 00:10:34.873 04:59:49 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:34.873 [2024-07-24 04:59:49.304374] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:34.873 [2024-07-24 04:59:49.304505] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:34.873 [2024-07-24 04:59:49.304526] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:34.873 [2024-07-24 04:59:49.306399] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:34.873 [2024-07-24 04:59:49.306467] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:34.873 [2024-07-24 04:59:49.306486] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:34.873 [2024-07-24 04:59:49.308059] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. 
Dropping the request. 00:10:34.873 [2024-07-24 04:59:49.308168] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:34.873 [2024-07-24 04:59:49.308351] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:34.873 [2024-07-24 04:59:49.309989] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:34.873 [2024-07-24 04:59:49.310205] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:34.873 [2024-07-24 04:59:49.310382] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69538) is not found. Dropping the request. 00:10:34.873 Child process pid: 70054 00:10:35.134 [Child] Asynchronous Event Request test 00:10:35.134 [Child] Attached to 0000:00:10.0 00:10:35.134 [Child] Attached to 0000:00:11.0 00:10:35.134 [Child] Attached to 0000:00:13.0 00:10:35.134 [Child] Attached to 0000:00:12.0 00:10:35.134 [Child] Registering asynchronous event callbacks... 00:10:35.134 [Child] Getting orig temperature thresholds of all controllers 00:10:35.134 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:35.134 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:35.134 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:35.134 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:35.134 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:35.134 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:35.134 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:35.135 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:35.135 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:35.135 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:35.135 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:35.135 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:35.135 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:35.135 [Child] Cleaning up... 00:10:35.135 Asynchronous Event Request test 00:10:35.135 Attached to 0000:00:10.0 00:10:35.135 Attached to 0000:00:11.0 00:10:35.135 Attached to 0000:00:13.0 00:10:35.135 Attached to 0000:00:12.0 00:10:35.135 Reset controller to setup AER completions for this process 00:10:35.135 Registering asynchronous event callbacks... 
00:10:35.135 Getting orig temperature thresholds of all controllers 00:10:35.135 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:35.135 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:35.135 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:35.135 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:35.135 Setting all controllers temperature threshold low to trigger AER 00:10:35.135 Waiting for all controllers temperature threshold to be set lower 00:10:35.135 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:35.135 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:35.135 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:35.135 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:35.135 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:35.135 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:35.135 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:35.135 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:35.135 Waiting for all controllers to trigger AER and reset threshold 00:10:35.135 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:35.135 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:35.135 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:35.135 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:35.135 Cleaning up... 00:10:35.135 ************************************ 00:10:35.135 END TEST nvme_multi_aen 00:10:35.135 ************************************ 00:10:35.135 00:10:35.135 real 0m0.544s 00:10:35.135 user 0m0.197s 00:10:35.135 sys 0m0.229s 00:10:35.135 04:59:49 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.135 04:59:49 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:35.135 04:59:49 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:35.135 04:59:49 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:35.135 04:59:49 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.135 04:59:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:35.135 ************************************ 00:10:35.135 START TEST nvme_startup 00:10:35.135 ************************************ 00:10:35.135 04:59:49 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:35.400 Initializing NVMe Controllers 00:10:35.400 Attached to 0000:00:10.0 00:10:35.400 Attached to 0000:00:11.0 00:10:35.400 Attached to 0000:00:13.0 00:10:35.400 Attached to 0000:00:12.0 00:10:35.400 Initialization complete. 00:10:35.400 Time used:157025.812 (us). 
00:10:35.400 ************************************ 00:10:35.400 END TEST nvme_startup 00:10:35.400 ************************************ 00:10:35.400 00:10:35.400 real 0m0.239s 00:10:35.400 user 0m0.077s 00:10:35.400 sys 0m0.118s 00:10:35.400 04:59:49 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.400 04:59:49 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:35.400 04:59:49 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:35.400 04:59:49 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:35.400 04:59:49 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.400 04:59:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:35.400 ************************************ 00:10:35.400 START TEST nvme_multi_secondary 00:10:35.400 ************************************ 00:10:35.400 04:59:49 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:10:35.400 04:59:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=70110 00:10:35.400 04:59:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:35.400 04:59:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=70111 00:10:35.400 04:59:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:35.400 04:59:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:39.589 Initializing NVMe Controllers 00:10:39.589 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:39.589 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:39.589 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:39.589 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:39.589 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:39.589 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:39.589 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:39.589 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:39.589 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:39.589 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:39.589 Initialization complete. Launching workers. 
00:10:39.589 ======================================================== 00:10:39.589 Latency(us) 00:10:39.589 Device Information : IOPS MiB/s Average min max 00:10:39.589 PCIE (0000:00:10.0) NSID 1 from core 1: 5220.09 20.39 3063.11 1506.62 6715.20 00:10:39.589 PCIE (0000:00:11.0) NSID 1 from core 1: 5220.09 20.39 3064.10 1466.41 6851.93 00:10:39.589 PCIE (0000:00:13.0) NSID 1 from core 1: 5220.09 20.39 3063.91 1346.79 7188.53 00:10:39.589 PCIE (0000:00:12.0) NSID 1 from core 1: 5220.09 20.39 3063.60 1443.28 7156.32 00:10:39.589 PCIE (0000:00:12.0) NSID 2 from core 1: 5220.09 20.39 3063.35 1436.30 6787.34 00:10:39.589 PCIE (0000:00:12.0) NSID 3 from core 1: 5220.09 20.39 3063.41 1431.23 6433.38 00:10:39.589 ======================================================== 00:10:39.589 Total : 31320.51 122.35 3063.58 1346.79 7188.53 00:10:39.589 00:10:39.589 Initializing NVMe Controllers 00:10:39.589 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:39.589 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:39.589 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:39.589 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:39.589 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:39.589 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:39.589 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:39.589 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:39.589 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:39.589 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:39.589 Initialization complete. Launching workers. 00:10:39.589 ======================================================== 00:10:39.589 Latency(us) 00:10:39.589 Device Information : IOPS MiB/s Average min max 00:10:39.589 PCIE (0000:00:10.0) NSID 1 from core 2: 2515.77 9.83 6357.28 1350.63 13440.49 00:10:39.589 PCIE (0000:00:11.0) NSID 1 from core 2: 2515.77 9.83 6359.15 1540.22 14071.26 00:10:39.589 PCIE (0000:00:13.0) NSID 1 from core 2: 2515.77 9.83 6359.05 1494.84 13991.89 00:10:39.589 PCIE (0000:00:12.0) NSID 1 from core 2: 2515.77 9.83 6357.72 1600.62 13090.87 00:10:39.589 PCIE (0000:00:12.0) NSID 2 from core 2: 2515.77 9.83 6367.06 1603.88 13755.90 00:10:39.589 PCIE (0000:00:12.0) NSID 3 from core 2: 2515.77 9.83 6366.75 1594.58 13953.07 00:10:39.589 ======================================================== 00:10:39.589 Total : 15094.65 58.96 6361.17 1350.63 14071.26 00:10:39.589 00:10:39.589 04:59:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 70110 00:10:40.964 Initializing NVMe Controllers 00:10:40.964 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:40.964 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:40.964 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:40.964 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:40.964 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:40.964 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:40.964 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:40.964 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:40.964 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:40.964 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:40.964 Initialization complete. Launching workers. 
00:10:40.964 ======================================================== 00:10:40.964 Latency(us) 00:10:40.964 Device Information : IOPS MiB/s Average min max 00:10:40.964 PCIE (0000:00:10.0) NSID 1 from core 0: 7903.17 30.87 2023.00 993.05 6045.03 00:10:40.964 PCIE (0000:00:11.0) NSID 1 from core 0: 7903.17 30.87 2024.01 1011.64 6114.76 00:10:40.964 PCIE (0000:00:13.0) NSID 1 from core 0: 7903.17 30.87 2023.95 961.56 6193.60 00:10:40.964 PCIE (0000:00:12.0) NSID 1 from core 0: 7903.17 30.87 2023.90 883.09 6048.13 00:10:40.964 PCIE (0000:00:12.0) NSID 2 from core 0: 7906.37 30.88 2023.03 828.44 6108.53 00:10:40.964 PCIE (0000:00:12.0) NSID 3 from core 0: 7906.37 30.88 2022.97 767.13 5922.90 00:10:40.964 ======================================================== 00:10:40.964 Total : 47425.45 185.26 2023.48 767.13 6193.60 00:10:40.964 00:10:40.964 04:59:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 70111 00:10:40.964 04:59:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=70180 00:10:40.964 04:59:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:40.964 04:59:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=70181 00:10:40.964 04:59:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:40.965 04:59:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:44.249 Initializing NVMe Controllers 00:10:44.249 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:44.249 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:44.249 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:44.249 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:44.249 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:44.249 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:44.249 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:44.249 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:44.249 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:44.249 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:44.249 Initialization complete. Launching workers. 
00:10:44.249 ======================================================== 00:10:44.249 Latency(us) 00:10:44.249 Device Information : IOPS MiB/s Average min max 00:10:44.249 PCIE (0000:00:10.0) NSID 1 from core 1: 5243.55 20.48 3049.60 1063.97 9124.76 00:10:44.249 PCIE (0000:00:11.0) NSID 1 from core 1: 5248.88 20.50 3047.74 1092.49 6770.79 00:10:44.249 PCIE (0000:00:13.0) NSID 1 from core 1: 5243.55 20.48 3050.77 1084.70 7626.52 00:10:44.249 PCIE (0000:00:12.0) NSID 1 from core 1: 5243.55 20.48 3051.02 1094.39 8065.35 00:10:44.249 PCIE (0000:00:12.0) NSID 2 from core 1: 5243.55 20.48 3051.07 1087.13 8193.10 00:10:44.249 PCIE (0000:00:12.0) NSID 3 from core 1: 5243.55 20.48 3051.00 1097.09 8501.67 00:10:44.249 ======================================================== 00:10:44.249 Total : 31466.64 122.92 3050.20 1063.97 9124.76 00:10:44.249 00:10:44.249 Initializing NVMe Controllers 00:10:44.249 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:44.249 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:44.249 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:44.249 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:44.249 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:44.249 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:44.249 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:44.249 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:44.249 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:44.249 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:44.249 Initialization complete. Launching workers. 00:10:44.249 ======================================================== 00:10:44.249 Latency(us) 00:10:44.249 Device Information : IOPS MiB/s Average min max 00:10:44.249 PCIE (0000:00:10.0) NSID 1 from core 0: 5220.97 20.39 3062.72 998.13 8193.26 00:10:44.249 PCIE (0000:00:11.0) NSID 1 from core 0: 5220.97 20.39 3063.95 1030.18 8446.45 00:10:44.249 PCIE (0000:00:13.0) NSID 1 from core 0: 5220.97 20.39 3063.87 912.09 8893.35 00:10:44.249 PCIE (0000:00:12.0) NSID 1 from core 0: 5226.30 20.42 3060.65 881.11 6590.51 00:10:44.249 PCIE (0000:00:12.0) NSID 2 from core 0: 5220.97 20.39 3063.66 851.55 8030.53 00:10:44.249 PCIE (0000:00:12.0) NSID 3 from core 0: 5220.97 20.39 3063.58 804.33 7895.34 00:10:44.249 ======================================================== 00:10:44.249 Total : 31331.15 122.39 3063.07 804.33 8893.35 00:10:44.249 00:10:46.154 Initializing NVMe Controllers 00:10:46.154 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:46.154 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:46.154 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:46.154 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:46.154 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:46.154 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:46.154 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:46.154 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:46.154 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:46.154 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:46.154 Initialization complete. Launching workers. 
00:10:46.154 ======================================================== 00:10:46.154 Latency(us) 00:10:46.154 Device Information : IOPS MiB/s Average min max 00:10:46.154 PCIE (0000:00:10.0) NSID 1 from core 2: 3624.37 14.16 4412.88 1022.91 14294.64 00:10:46.154 PCIE (0000:00:11.0) NSID 1 from core 2: 3627.57 14.17 4409.75 1022.70 18581.73 00:10:46.154 PCIE (0000:00:13.0) NSID 1 from core 2: 3627.57 14.17 4409.95 1025.84 18623.21 00:10:46.154 PCIE (0000:00:12.0) NSID 1 from core 2: 3624.37 14.16 4413.07 1036.84 14364.05 00:10:46.154 PCIE (0000:00:12.0) NSID 2 from core 2: 3624.37 14.16 4412.95 974.65 13924.37 00:10:46.154 PCIE (0000:00:12.0) NSID 3 from core 2: 3624.37 14.16 4408.90 880.70 14230.45 00:10:46.154 ======================================================== 00:10:46.155 Total : 21752.61 84.97 4411.25 880.70 18623.21 00:10:46.155 00:10:46.155 ************************************ 00:10:46.155 END TEST nvme_multi_secondary 00:10:46.155 ************************************ 00:10:46.155 05:00:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 70180 00:10:46.155 05:00:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 70181 00:10:46.155 00:10:46.155 real 0m10.699s 00:10:46.155 user 0m18.608s 00:10:46.155 sys 0m0.937s 00:10:46.155 05:00:00 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.155 05:00:00 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:46.155 05:00:00 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:46.155 05:00:00 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:46.155 05:00:00 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/69124 ]] 00:10:46.155 05:00:00 nvme -- common/autotest_common.sh@1088 -- # kill 69124 00:10:46.155 05:00:00 nvme -- common/autotest_common.sh@1089 -- # wait 69124 00:10:46.155 [2024-07-24 05:00:00.741931] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.155 [2024-07-24 05:00:00.742016] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.155 [2024-07-24 05:00:00.742048] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.155 [2024-07-24 05:00:00.742078] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.155 [2024-07-24 05:00:00.744610] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.155 [2024-07-24 05:00:00.744703] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.155 [2024-07-24 05:00:00.744730] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.155 [2024-07-24 05:00:00.744772] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.155 [2024-07-24 05:00:00.747314] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 
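Two things are worth noting around this point in the log. First, the flood of nvme_pcie_qpair_insert_pending_admin_request errors appears to be pending AER admin commands being discarded after their owning stub process (pid 69538, later 70053) has already exited, not a device failure. Second, the multi-secondary test just completed is three spdk_nvme_perf instances sharing one DPDK shared-memory ID on disjoint core masks; the sketch below reconstructs that pattern with the paths and flags taken from the invocations logged earlier (the exact primary/secondary ordering is our reading of the wait 70110 / wait 70111 calls).

    #!/usr/bin/env bash
    # Sketch: one longer-lived perf instance plus two secondaries, all joined
    # to the same SPDK/DPDK instance via -i 0 and pinned to disjoint cores.
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf  # assumption

    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # core 0, 5 s
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # core 1, 3 s
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid2=$!   # core 2, 3 s
    wait "$pid1" "$pid2"   # the short-lived secondaries are reaped first,
    wait "$pid0"           # then the 5-second instance, as in nvme/nvme.sh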
00:10:46.155 [2024-07-24 05:00:00.747376] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.155 [2024-07-24 05:00:00.747404] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.155 [2024-07-24 05:00:00.747432] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.155 [2024-07-24 05:00:00.749903] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.155 [2024-07-24 05:00:00.749985] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.155 [2024-07-24 05:00:00.750015] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.155 [2024-07-24 05:00:00.750046] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70053) is not found. Dropping the request. 00:10:46.412 [2024-07-24 05:00:01.022120] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:10:46.412 05:00:01 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:10:46.412 05:00:01 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:10:46.412 05:00:01 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:46.412 05:00:01 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:46.412 05:00:01 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.412 05:00:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:46.670 ************************************ 00:10:46.670 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:46.670 ************************************ 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:46.670 * Looking for test storage... 
00:10:46.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1522 -- # bdfs=() 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1522 -- # local bdfs 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs)) 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # get_nvme_bdfs 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # bdfs=() 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # local bdfs 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # (( 4 == 0 )) 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # echo 0000:00:10.0 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=70335 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 70335 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 70335 ']' 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.670 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:46.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
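For orientation, the xtrace above is the get_first_nvme_bdf step: the reset-stuck-adm-cmd test only needs a single controller, so it takes the first address that get_nvme_bdfs reports and refuses to start without one. A sketch under the same assumptions as the earlier bdf snippet (it reuses that get_nvme_bdfs helper):

    #!/usr/bin/env bash
    # Sketch of picking the target controller for the stuck-admin-command test.
    get_first_nvme_bdf() {
      local bdfs=($(get_nvme_bdfs))   # helper sketched earlier in this log
      echo "${bdfs[0]}"               # on this machine: 0000:00:10.0
    }

    bdf=$(get_first_nvme_bdf)
    [[ -n $bdf ]] || { echo 'no NVMe controller found' >&2; exit 1; }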
00:10:46.671 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.671 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:46.671 05:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:46.928 [2024-07-24 05:00:01.305406] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:10:46.928 [2024-07-24 05:00:01.305614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70335 ] 00:10:46.928 [2024-07-24 05:00:01.496178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.185 [2024-07-24 05:00:01.736510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.185 [2024-07-24 05:00:01.736628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.185 [2024-07-24 05:00:01.736738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.185 [2024-07-24 05:00:01.736751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:48.115 nvme0n1 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_WzXkw.txt 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:48.115 true 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721797202 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=70362 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:48.115 05:00:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:50.031 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:50.031 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.031 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:50.031 [2024-07-24 05:00:04.556258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:50.031 [2024-07-24 05:00:04.556730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:50.031 [2024-07-24 05:00:04.556773] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:50.031 [2024-07-24 05:00:04.556796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.031 [2024-07-24 05:00:04.558948] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.032 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 70362 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 70362 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 70362 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_WzXkw.txt 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:50.032 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_WzXkw.txt 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 70335 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 70335 ']' 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 70335 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70335 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:50.291 killing process with pid 70335 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70335' 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 70335 00:10:50.291 05:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 70335 00:10:52.193 05:00:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:52.193 05:00:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:52.193 00:10:52.193 real 0m5.580s 00:10:52.193 user 0m19.321s 00:10:52.193 sys 0m0.545s 00:10:52.193 05:00:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.193 05:00:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:52.193 ************************************ 00:10:52.193 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:52.193 ************************************ 00:10:52.193 05:00:06 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:52.193 05:00:06 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:52.193 05:00:06 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:52.193 05:00:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.193 05:00:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:52.193 ************************************ 00:10:52.193 START TEST nvme_fio 00:10:52.193 ************************************ 00:10:52.193 05:00:06 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:10:52.193 05:00:06 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:52.193 05:00:06 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:52.193 05:00:06 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:52.193 05:00:06 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # bdfs=() 00:10:52.193 05:00:06 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # local bdfs 00:10:52.193 05:00:06 nvme.nvme_fio -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:52.193 05:00:06 nvme.nvme_fio -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:52.193 05:00:06 nvme.nvme_fio -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:10:52.193 05:00:06 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # (( 4 == 0 )) 00:10:52.193 05:00:06 nvme.nvme_fio -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:52.193 05:00:06 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:52.193 05:00:06 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:52.193 05:00:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:52.193 05:00:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:52.193 05:00:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:52.452 05:00:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:52.452 05:00:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:52.714 05:00:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:52.714 05:00:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local 
fio_dir=/usr/src/fio 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local sanitizers 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # shift 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local asan_lib= 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # grep libasan 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # break 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:52.714 05:00:07 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:52.973 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:52.973 fio-3.35 00:10:52.973 Starting 1 thread 00:10:56.264 00:10:56.264 test: (groupid=0, jobs=1): err= 0: pid=70505: Wed Jul 24 05:00:10 2024 00:10:56.264 read: IOPS=14.9k, BW=58.0MiB/s (60.8MB/s)(116MiB/2001msec) 00:10:56.264 slat (nsec): min=4029, max=92734, avg=6419.72, stdev=2854.73 00:10:56.264 clat (usec): min=246, max=9216, avg=4283.22, stdev=658.30 00:10:56.264 lat (usec): min=251, max=9228, avg=4289.64, stdev=659.04 00:10:56.264 clat percentiles (usec): 00:10:56.264 | 1.00th=[ 2900], 5.00th=[ 3458], 10.00th=[ 3621], 20.00th=[ 3785], 00:10:56.264 | 30.00th=[ 3949], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4359], 00:10:56.264 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 4948], 95.00th=[ 5342], 00:10:56.264 | 99.00th=[ 6587], 99.50th=[ 7635], 99.90th=[ 8717], 99.95th=[ 8979], 00:10:56.264 | 99.99th=[ 9110] 00:10:56.264 bw ( KiB/s): min=56832, max=60456, per=98.68%, avg=58638.00, stdev=1812.03, samples=3 00:10:56.264 iops : min=14208, max=15114, avg=14659.33, stdev=453.01, samples=3 00:10:56.264 write: IOPS=14.9k, BW=58.0MiB/s (60.9MB/s)(116MiB/2001msec); 0 zone resets 00:10:56.264 slat (nsec): min=4078, max=47706, avg=6621.12, stdev=2885.19 00:10:56.264 clat (usec): min=255, max=12236, avg=4299.18, stdev=687.28 00:10:56.264 lat (usec): min=260, max=12247, avg=4305.80, stdev=688.05 00:10:56.264 clat percentiles (usec): 00:10:56.264 | 1.00th=[ 2900], 5.00th=[ 3490], 10.00th=[ 3621], 20.00th=[ 3785], 00:10:56.264 | 30.00th=[ 3949], 40.00th=[ 4113], 50.00th=[ 4293], 60.00th=[ 4359], 00:10:56.264 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 4948], 95.00th=[ 5342], 00:10:56.264 | 99.00th=[ 6783], 99.50th=[ 7767], 99.90th=[ 9634], 99.95th=[11207], 00:10:56.264 | 99.99th=[12256] 00:10:56.264 bw ( KiB/s): min=56704, max=59792, per=98.37%, avg=58470.00, stdev=1591.16, 
samples=3 00:10:56.265 iops : min=14176, max=14948, avg=14617.33, stdev=397.72, samples=3 00:10:56.265 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.01% 00:10:56.265 lat (msec) : 2=0.11%, 4=32.93%, 10=66.87%, 20=0.04% 00:10:56.265 cpu : usr=98.80%, sys=0.15%, ctx=4, majf=0, minf=608 00:10:56.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:56.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:56.265 issued rwts: total=29725,29733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:56.265 00:10:56.265 Run status group 0 (all jobs): 00:10:56.265 READ: bw=58.0MiB/s (60.8MB/s), 58.0MiB/s-58.0MiB/s (60.8MB/s-60.8MB/s), io=116MiB (122MB), run=2001-2001msec 00:10:56.265 WRITE: bw=58.0MiB/s (60.9MB/s), 58.0MiB/s-58.0MiB/s (60.9MB/s-60.9MB/s), io=116MiB (122MB), run=2001-2001msec 00:10:56.265 ----------------------------------------------------- 00:10:56.265 Suppressions used: 00:10:56.265 count bytes template 00:10:56.265 1 32 /usr/src/fio/parse.c 00:10:56.265 1 8 libtcmalloc_minimal.so 00:10:56.265 ----------------------------------------------------- 00:10:56.265 00:10:56.265 05:00:10 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:56.265 05:00:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:56.265 05:00:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:56.265 05:00:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:56.524 05:00:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:56.524 05:00:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:56.782 05:00:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:56.782 05:00:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local sanitizers 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # shift 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local asan_lib= 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # grep libasan 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1343 
-- # asan_lib=/usr/lib64/libasan.so.8 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # break 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:56.782 05:00:11 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:56.782 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:56.782 fio-3.35 00:10:56.782 Starting 1 thread 00:11:00.064 00:11:00.064 test: (groupid=0, jobs=1): err= 0: pid=70570: Wed Jul 24 05:00:14 2024 00:11:00.064 read: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(115MiB/2001msec) 00:11:00.064 slat (nsec): min=4032, max=78123, avg=6563.06, stdev=3160.00 00:11:00.064 clat (usec): min=383, max=10184, avg=4336.63, stdev=618.26 00:11:00.064 lat (usec): min=390, max=10262, avg=4343.19, stdev=619.07 00:11:00.064 clat percentiles (usec): 00:11:00.064 | 1.00th=[ 3523], 5.00th=[ 3654], 10.00th=[ 3720], 20.00th=[ 3851], 00:11:00.064 | 30.00th=[ 3949], 40.00th=[ 4080], 50.00th=[ 4228], 60.00th=[ 4424], 00:11:00.064 | 70.00th=[ 4555], 80.00th=[ 4817], 90.00th=[ 5080], 95.00th=[ 5276], 00:11:00.064 | 99.00th=[ 5735], 99.50th=[ 7635], 99.90th=[ 9241], 99.95th=[ 9372], 00:11:00.064 | 99.99th=[10159] 00:11:00.064 bw ( KiB/s): min=51145, max=63240, per=100.00%, avg=58792.33, stdev=6652.12, samples=3 00:11:00.064 iops : min=12786, max=15810, avg=14698.00, stdev=1663.17, samples=3 00:11:00.064 write: IOPS=14.7k, BW=57.5MiB/s (60.3MB/s)(115MiB/2001msec); 0 zone resets 00:11:00.064 slat (nsec): min=4406, max=62441, avg=6772.61, stdev=3246.32 00:11:00.064 clat (usec): min=295, max=10051, avg=4336.74, stdev=605.83 00:11:00.064 lat (usec): min=302, max=10066, avg=4343.51, stdev=606.64 00:11:00.064 clat percentiles (usec): 00:11:00.064 | 1.00th=[ 3523], 5.00th=[ 3654], 10.00th=[ 3720], 20.00th=[ 3851], 00:11:00.064 | 30.00th=[ 3949], 40.00th=[ 4080], 50.00th=[ 4228], 60.00th=[ 4424], 00:11:00.064 | 70.00th=[ 4621], 80.00th=[ 4817], 90.00th=[ 5014], 95.00th=[ 5276], 00:11:00.064 | 99.00th=[ 5669], 99.50th=[ 7439], 99.90th=[ 9241], 99.95th=[ 9241], 00:11:00.064 | 99.99th=[ 9765] 00:11:00.064 bw ( KiB/s): min=51385, max=62760, per=99.61%, avg=58621.67, stdev=6288.67, samples=3 00:11:00.064 iops : min=12846, max=15690, avg=14655.33, stdev=1572.31, samples=3 00:11:00.064 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:00.064 lat (msec) : 2=0.05%, 4=33.59%, 10=66.33%, 20=0.01% 00:11:00.064 cpu : usr=98.85%, sys=0.15%, ctx=4, majf=0, minf=607 00:11:00.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:00.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.064 issued rwts: total=29405,29439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.064 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.064 00:11:00.064 Run status group 0 (all jobs): 00:11:00.064 READ: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=115MiB (120MB), run=2001-2001msec 00:11:00.064 WRITE: bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=115MiB (121MB), run=2001-2001msec 00:11:00.064 
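The xtrace block repeated before each fio start above is the sanitizer shim from autotest_common.sh@1335-1350: the externally built /usr/src/fio/fio is not ASan-instrumented, but the SPDK ioengine it loads is, so the shim asks the dynamic linker which sanitizer runtime the plugin links against and preloads that runtime ahead of the plugin itself. A minimal sketch of the same dance, with paths taken from the trace (the wrapper name run_fio_with_asan is illustrative; the trace inlines these steps):

    # Find the sanitizer runtime the SPDK fio plugin links against and preload
    # it so a non-instrumented fio binary can still dlopen the plugin.
    run_fio_with_asan() {    # illustrative name, not from the source
      local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
      local sanitizers=('libasan' 'libclang_rt.asan') asan_lib= sanitizer
      for sanitizer in "${sanitizers[@]}"; do
        # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"; $3 is the path.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
      done
      LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    }

Without the preload, fio would typically abort at plugin-load time, since the ASan runtime has to be among the first DSOs initialized in the process.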
----------------------------------------------------- 00:11:00.064 Suppressions used: 00:11:00.064 count bytes template 00:11:00.064 1 32 /usr/src/fio/parse.c 00:11:00.064 1 8 libtcmalloc_minimal.so 00:11:00.064 ----------------------------------------------------- 00:11:00.064 00:11:00.064 05:00:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:00.064 05:00:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:00.064 05:00:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:00.064 05:00:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:00.323 05:00:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:00.323 05:00:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:00.582 05:00:15 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:00.582 05:00:15 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local sanitizers 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # shift 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local asan_lib= 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # grep libasan 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # break 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:00.582 05:00:15 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:00.841 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:00.841 fio-3.35 00:11:00.841 Starting 1 thread 00:11:04.133 00:11:04.133 test: (groupid=0, jobs=1): err= 0: pid=70630: Wed Jul 24 05:00:18 2024 00:11:04.133 read: IOPS=14.5k, BW=56.7MiB/s (59.4MB/s)(113MiB/2001msec) 00:11:04.133 slat (nsec): min=4230, max=51305, avg=6404.43, stdev=3405.94 
00:11:04.133 clat (usec): min=607, max=10397, avg=4390.13, stdev=541.85 00:11:04.133 lat (usec): min=614, max=10436, avg=4396.54, stdev=542.40 00:11:04.133 clat percentiles (usec): 00:11:04.133 | 1.00th=[ 3359], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3949], 00:11:04.133 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4490], 00:11:04.133 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5014], 95.00th=[ 5211], 00:11:04.133 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 7635], 99.95th=[ 8455], 00:11:04.133 | 99.99th=[10421] 00:11:04.133 bw ( KiB/s): min=55000, max=63184, per=100.00%, avg=58269.33, stdev=4332.99, samples=3 00:11:04.133 iops : min=13750, max=15796, avg=14567.33, stdev=1083.25, samples=3 00:11:04.133 write: IOPS=14.5k, BW=56.8MiB/s (59.5MB/s)(114MiB/2001msec); 0 zone resets 00:11:04.133 slat (nsec): min=4294, max=83149, avg=6559.06, stdev=3498.92 00:11:04.133 clat (usec): min=348, max=10323, avg=4395.36, stdev=563.03 00:11:04.133 lat (usec): min=355, max=10334, avg=4401.92, stdev=563.53 00:11:04.133 clat percentiles (usec): 00:11:04.133 | 1.00th=[ 3294], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3949], 00:11:04.133 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4490], 00:11:04.133 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5014], 95.00th=[ 5211], 00:11:04.133 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 9241], 99.95th=[10028], 00:11:04.133 | 99.99th=[10028] 00:11:04.133 bw ( KiB/s): min=55352, max=62304, per=100.00%, avg=58160.00, stdev=3663.50, samples=3 00:11:04.133 iops : min=13838, max=15576, avg=14540.00, stdev=915.88, samples=3 00:11:04.133 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:04.133 lat (msec) : 2=0.04%, 4=22.74%, 10=77.16%, 20=0.03% 00:11:04.133 cpu : usr=98.80%, sys=0.15%, ctx=5, majf=0, minf=608 00:11:04.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:04.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.133 issued rwts: total=29039,29088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.133 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.133 00:11:04.133 Run status group 0 (all jobs): 00:11:04.133 READ: bw=56.7MiB/s (59.4MB/s), 56.7MiB/s-56.7MiB/s (59.4MB/s-59.4MB/s), io=113MiB (119MB), run=2001-2001msec 00:11:04.133 WRITE: bw=56.8MiB/s (59.5MB/s), 56.8MiB/s-56.8MiB/s (59.5MB/s-59.5MB/s), io=114MiB (119MB), run=2001-2001msec 00:11:04.133 ----------------------------------------------------- 00:11:04.133 Suppressions used: 00:11:04.133 count bytes template 00:11:04.133 1 32 /usr/src/fio/parse.c 00:11:04.133 1 8 libtcmalloc_minimal.so 00:11:04.133 ----------------------------------------------------- 00:11:04.133 00:11:04.133 05:00:18 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:04.133 05:00:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:04.133 05:00:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:04.133 05:00:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:04.423 05:00:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:04.423 05:00:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:04.690 05:00:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:04.690 05:00:19 nvme.nvme_fio -- 
nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local sanitizers 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # shift 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local asan_lib= 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # grep libasan 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # break 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:04.690 05:00:19 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:04.953 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:04.953 fio-3.35 00:11:04.953 Starting 1 thread 00:11:09.140 00:11:09.140 test: (groupid=0, jobs=1): err= 0: pid=70686: Wed Jul 24 05:00:23 2024 00:11:09.140 read: IOPS=15.5k, BW=60.6MiB/s (63.6MB/s)(121MiB/2001msec) 00:11:09.140 slat (nsec): min=4136, max=52584, avg=6087.88, stdev=2713.80 00:11:09.140 clat (usec): min=254, max=8949, avg=4113.33, stdev=455.11 00:11:09.140 lat (usec): min=260, max=8960, avg=4119.42, stdev=455.49 00:11:09.140 clat percentiles (usec): 00:11:09.140 | 1.00th=[ 2835], 5.00th=[ 3589], 10.00th=[ 3687], 20.00th=[ 3818], 00:11:09.140 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4178], 00:11:09.140 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4752], 00:11:09.140 | 99.00th=[ 5080], 99.50th=[ 5407], 99.90th=[ 8094], 99.95th=[ 8717], 00:11:09.140 | 99.99th=[ 8848] 00:11:09.140 bw ( KiB/s): min=60800, max=64440, per=100.00%, avg=63112.00, stdev=2009.62, samples=3 00:11:09.140 iops : min=15200, max=16110, avg=15778.00, stdev=502.41, samples=3 00:11:09.140 write: IOPS=15.5k, BW=60.6MiB/s (63.6MB/s)(121MiB/2001msec); 0 zone resets 00:11:09.140 slat (nsec): min=4330, max=52690, avg=6317.94, stdev=2762.85 00:11:09.140 clat (usec): min=275, max=9979, avg=4102.70, stdev=481.36 00:11:09.140 lat (usec): min=281, max=9989, avg=4109.02, stdev=481.82 00:11:09.140 
clat percentiles (usec): 00:11:09.140 | 1.00th=[ 2835], 5.00th=[ 3589], 10.00th=[ 3687], 20.00th=[ 3785], 00:11:09.140 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4146], 00:11:09.140 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4752], 00:11:09.140 | 99.00th=[ 5080], 99.50th=[ 5604], 99.90th=[ 8717], 99.95th=[ 9372], 00:11:09.140 | 99.99th=[ 9896] 00:11:09.140 bw ( KiB/s): min=61080, max=63800, per=100.00%, avg=62792.00, stdev=1490.41, samples=3 00:11:09.140 iops : min=15270, max=15950, avg=15698.00, stdev=372.60, samples=3 00:11:09.140 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:11:09.140 lat (msec) : 2=0.06%, 4=42.78%, 10=57.11% 00:11:09.140 cpu : usr=99.05%, sys=0.00%, ctx=4, majf=0, minf=606 00:11:09.140 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:09.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.140 issued rwts: total=31051,31058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.141 00:11:09.141 Run status group 0 (all jobs): 00:11:09.141 READ: bw=60.6MiB/s (63.6MB/s), 60.6MiB/s-60.6MiB/s (63.6MB/s-63.6MB/s), io=121MiB (127MB), run=2001-2001msec 00:11:09.141 WRITE: bw=60.6MiB/s (63.6MB/s), 60.6MiB/s-60.6MiB/s (63.6MB/s-63.6MB/s), io=121MiB (127MB), run=2001-2001msec 00:11:09.141 ----------------------------------------------------- 00:11:09.141 Suppressions used: 00:11:09.141 count bytes template 00:11:09.141 1 32 /usr/src/fio/parse.c 00:11:09.141 1 8 libtcmalloc_minimal.so 00:11:09.141 ----------------------------------------------------- 00:11:09.141 00:11:09.141 05:00:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:09.141 05:00:23 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:09.141 00:11:09.141 real 0m16.793s 00:11:09.141 user 0m13.320s 00:11:09.141 sys 0m2.441s 00:11:09.141 05:00:23 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.141 ************************************ 00:11:09.141 END TEST nvme_fio 00:11:09.141 ************************************ 00:11:09.141 05:00:23 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:09.141 00:11:09.141 real 1m29.758s 00:11:09.141 user 3m42.741s 00:11:09.141 sys 0m14.319s 00:11:09.141 05:00:23 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.141 05:00:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:09.141 ************************************ 00:11:09.141 END TEST nvme 00:11:09.141 ************************************ 00:11:09.141 05:00:23 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:11:09.141 05:00:23 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:09.141 05:00:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:09.141 05:00:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.141 05:00:23 -- common/autotest_common.sh@10 -- # set +x 00:11:09.141 ************************************ 00:11:09.141 START TEST nvme_scc 00:11:09.141 ************************************ 00:11:09.141 05:00:23 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:09.141 * Looking for test storage... 
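That closes TEST nvme_fio. Condensed, the loop that just ran four times (nvme.sh@33-44) enumerates the controller BDFs from gen_nvme.sh, skips any controller whose identify output shows no active namespace, picks a block size from the identify data, and runs the plugin job per controller. A compact restatement of what the trace shows; only the bs=4096 branch appears because that is the branch every run above takes, and the BDF is rewritten with dots because fio treats ':' in --filename as a separator:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
      # Skip controllers that expose no active namespace.
      "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" |
        grep -qE '^Namespace ID:[0-9]+' || continue
      bs=4096   # branch taken in every run above; the 'Extended Data LBA' grep selects the other branch
      fio_nvme "$rootdir/app/fio/nvme/example_config.fio" \
        "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
    done

As a sanity check on the numbers, the first run issued 29725 reads of 4096 B in 2001 ms: 29725 * 4096 / 2.001 s is roughly 60.8 MB/s (58.0 MiB/s), matching its READ status line.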
00:11:09.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:09.141 05:00:23 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:09.141 05:00:23 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:09.141 05:00:23 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:09.141 05:00:23 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:09.141 05:00:23 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.141 05:00:23 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.141 05:00:23 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.141 05:00:23 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.141 05:00:23 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.141 05:00:23 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.141 05:00:23 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.141 05:00:23 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:09.141 05:00:23 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.141 05:00:23 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:09.141 05:00:23 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:09.141 05:00:23 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:09.141 05:00:23 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:09.141 05:00:23 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:09.141 05:00:23 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:09.141 05:00:23 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:09.141 05:00:23 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:09.141 05:00:23 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:09.141 05:00:23 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.141 05:00:23 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:09.141 05:00:23 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:09.141 05:00:23 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:09.141 05:00:23 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:09.399 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:09.656 Waiting for block devices as requested 00:11:09.656 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.913 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.913 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.913 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:15.189 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:15.189 05:00:29 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:15.189 05:00:29 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:15.189 05:00:29 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:15.189 05:00:29 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:15.189 05:00:29 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:15.189 
05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:15.189 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:15.190 05:00:29 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:15.190 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
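The cadence cycling above is nvme_get from functions.sh: the output of /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 is split on ':' by a read loop, and each "name : value" pair is eval'ed into a global associative array named by the caller (nvme0 here), so later checks can consult fields such as ${nvme0[oncs]} directly. A stripped-down sketch of that loop; the trimming and name-normalization lines are assumptions, since the trace only shows the IFS=: split, the [[ -n ... ]] guard, and the eval:

    nvme_get() {                      # e.g. nvme_get nvme0 id-ctrl /dev/nvme0
      local ref=$1 reg val
      shift
      local -gA "$ref=()"             # global associative array for this ctrl
      while IFS=: read -r reg val; do
        [[ -n $val ]] || continue     # keep only "name : value" lines
        reg=${reg//[![:alnum:]_]/}    # normalize the field name (assumption)
        val="${val#"${val%%[![:space:]]*}"}"   # left-trim the value (assumption)
        eval "${ref}[\$reg]=\"\$val\""         # e.g. nvme0[vid]="0x1b36"
      done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

The same helper runs again per namespace (nvme_get nvme0n1 id-ns /dev/nvme0n1 further down), which is why the identical IFS=:/read/eval pattern repeats for the id-ns fields.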
00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:15.191 05:00:29 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:15.191 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.192 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:15.193 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:15.194 05:00:29 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:15.194 05:00:29 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:15.194 05:00:29 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:15.194 05:00:29 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:15.194 05:00:29 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 
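The xtrace above is the capture loop of nvme/functions.sh doing its work: each controller's `nvme id-ctrl` output is read line by line with `IFS=: read -r reg val`, blank fields are skipped, and every populated field is eval'd into a per-device associative array (nvme0, nvme1, ...). A minimal standalone sketch of that pattern, assuming a fixed array name is acceptable (the real helper needs the quoted eval only because the target array name is itself a parameter):

declare -A ctrl=()
while IFS=: read -r reg val; do
  reg="${reg//[[:space:]]/}"               # 'vid       ' -> 'vid' (also 'ps 0' -> 'ps0')
  [[ -n "$val" ]] || continue              # skip fields the device left blank
  val="${val#"${val%%[![:space:]]*}"}"     # trim leading blanks; embedded ':' survives
  ctrl["$reg"]="$val"                      # e.g. ctrl[vid]=0x1b36, ctrl[mdts]=7
done < <(nvme id-ctrl /dev/nvme1)
printf 'vid=%s subnqn=%s\n' "${ctrl[vid]}" "${ctrl[subnqn]}"

Values that themselves contain colons (ps0, the lbaf descriptors) come through intact because `read` hands everything after the first separator to the last variable.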
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:15.194 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:15.195 05:00:29 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 
05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.195 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:15.196 05:00:29 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:15.196 05:00:29 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:15.196 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:15.197 05:00:29 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:15.197 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:15.198 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.198 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.198 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:15.198 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:15.198 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:15.198 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.198 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.198 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:15.198 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 
00:11:15.198 05:00:29 nvme_scc -- nvme/functions.sh -- id-ns /dev/nvme1n1 parsed into nvme1n1[]: nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:15.199 05:00:29 nvme_scc -- nvme/functions.sh -- nvme1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:11:15.199 05:00:29 nvme_scc -- nvme/functions.sh@58-63 -- registered: _ctrl_ns[1]=nvme1n1; ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:10.0 ordered_ctrls[1]=nvme1
00:11:15.199 05:00:29 nvme_scc -- nvme/functions.sh@47-52 -- next controller: /sys/class/nvme/nvme2 at pci 0000:00:12.0, pci_can_use returns 0, ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2 (via /usr/local/src/nvme-cli/nvme)
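The trace above repeats the same three-step pattern for every field: split an nvme-cli output line of the form "field : value" on ':' with read, then eval the pair into a global associative array named after the device. A minimal sketch of that pattern follows; the function name and the trimming details are illustrative, not the verbatim nvme/functions.sh implementation.

    # usage: nvme_get_sketch nvme2 id-ctrl /dev/nvme2
    nvme_get_sketch() {
        local ref=$1 subcmd=$2 dev=$3 reg val
        local -gA "$ref=()"                  # e.g. declare -gA nvme2=(), as at functions.sh@20
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # "vid       " -> "vid", "ps    0" -> "ps0"
            [[ -n $reg && -n $val ]] || continue   # skip headers/blank lines ([[ -n '' ]] above)
            val=${val# }                     # drop the single leading space after ':'
            eval "${ref}[\$reg]=\$val"       # nvme2[vid]=0x1b36, nvme2[mdts]=7, ...
        done < <(nvme "$subcmd" "$dev")
    }

After a call like nvme_get_sketch nvme2 id-ctrl /dev/nvme2, any field is available as "${nvme2[mdts]}".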
00:11:15.199 05:00:29 nvme_scc -- nvme/functions.sh -- id-ctrl /dev/nvme2 parsed into nvme2[]: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0
00:11:15.489 05:00:29 nvme_scc -- nvme/functions.sh -- nvme2[] continued: hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0
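One of the values just recorded, mdts=7, bounds the controller's I/O size. MDTS is a power of two in units of the controller's minimum memory page size (CAP.MPSMIN); the 4 KiB page below is an assumption (the usual minimum), not something read from this log.

    mdts=7                                  # from the nvme2 fields above
    page_size=$((4 * 1024))                 # assumed CAP.MPSMIN page size
    if (( mdts == 0 )); then
        echo "mdts=0: no reported transfer size limit"
    else
        echo "max transfer: $(( (1 << mdts) * page_size )) bytes"   # 524288 (512 KiB)
    fi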
00:11:15.491 05:00:29 nvme_scc -- nvme/functions.sh -- nvme2[] power state: ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:11:15.491 05:00:29 nvme_scc -- nvme/functions.sh@53-57 -- local -n _ctrl_ns=nvme2_ns; found /sys/class/nvme/nvme2/nvme2n1, ns_dev=nvme2n1; nvme_get nvme2n1 id-ns /dev/nvme2n1: nsze=0x100000 ncap=0x100000
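The controller/namespace walk itself is visible at functions.sh@47..57 in the trace: glob the controllers under /sys/class/nvme, then each controller's namespaces, and run the same field parser on id-ns. A sketch of that loop, reusing the illustrative nvme_get_sketch from earlier (the glob patterns mirror the trace):

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                        # e.g. nvme2
        nvme_get_sketch "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        for ns in "$ctrl/${ctrl##*/}n"*; do         # e.g. /sys/class/nvme/nvme2/nvme2n1
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"
        done
    done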
00:11:15.491 05:00:29 nvme_scc -- nvme/functions.sh -- id-ns /dev/nvme2n1 parsed into nvme2n1[]: nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' ...
# IFS=: 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:15.492 05:00:29 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
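A note on the mechanism this trace keeps replaying: nvme_get in nvme/functions.sh runs /usr/local/src/nvme-cli/nvme id-ns (or id-ctrl) against a device, splits each output line on ":" with "while IFS=: read -r reg val", and evals every non-empty pair into a global associative array named after the device (nvme2n1, nvme2n2, ...). A minimal sketch, reconstructed from the trace rather than copied from SPDK's actual functions.sh; the whitespace handling for keys and values in particular is an approximation:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                          # e.g. declares global assoc array nvme2n2=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue                # header lines carry no value, as in [[ -n '' ]] above
            reg=${reg//[[:space:]]/}                 # "lbaf  4 " -> "lbaf4" (approximation)
            read -r val <<< "$val"                   # trim surrounding blanks (approximation)
            eval "${ref}[${reg}]=\"${val}\""         # e.g. nvme2n2[nsze]="0x100000"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    nvme_get nvme2n2 id-ns /dev/nvme2n2              # usage as in the trace above
    echo "${nvme2n2[nsze]}"                          # -> 0x100000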
00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:15.493 05:00:29 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:15.493 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.494 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:15.495 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
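The lbaf7 pair just read is the last field of nvme2n3's id-ns output; the entries that follow register that namespace and finish controller nvme2 before moving on to nvme3 (functions.sh lines @47-@63, plus the pci_can_use gate from scripts/common.sh). A rough sketch of that outer enumeration loop, reusing nvme_get from the sketch earlier in this log; the PCI-address lookup, the nameref behind _ctrl_ns, and the pci_can_use stub are assumptions, not SPDK's literal code:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    pci_can_use() { return 0; }                      # stub; the real check lives in scripts/common.sh

    for ctrl in /sys/class/nvme/nvme*; do
        pci=$(readlink -f "$ctrl/device")            # assumed way of finding the BDF
        pci=${pci##*/}                               # e.g. 0000:00:13.0
        pci_can_use "$pci" || continue               # skip controllers blocked for testing
        ctrl_dev=${ctrl##*/}                         # e.g. nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        unset -n _ctrl_ns
        declare -gA "${ctrl_dev}_ns=()"
        declare -n _ctrl_ns=${ctrl_dev}_ns           # assumed nameref to the per-ctrl map
        for ns in "$ctrl/${ctrl##*/}n"*; do
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}                         # e.g. nvme2n3
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev              # keyed by namespace id
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                 # e.g. ctrls[nvme2]=nvme2
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns            # name of that controller's namespace map
        bdfs["$ctrl_dev"]=$pci                       # e.g. bdfs[nvme2]=0000:00:12.0
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done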
00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:15.496 05:00:29 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:15.496 05:00:29 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:15.496 05:00:29 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:15.496 05:00:29 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.496 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:15.497 05:00:29 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:15.497 05:00:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:15.497 05:00:29 nvme_scc -- 
nvme/functions.sh@21-23 -- # [repetitive IFS=:/read -r reg val/eval trace condensed; values stored for nvme3:]
00:11:15.497   nvme3[mec]=0 nvme3[oacs]=0x12a nvme3[acl]=3 nvme3[aerl]=3 nvme3[frmw]=0x3 nvme3[lpa]=0x7 nvme3[elpe]=0 nvme3[npss]=0 nvme3[avscc]=0 nvme3[apsta]=0
00:11:15.497   nvme3[wctemp]=343 nvme3[cctemp]=373 nvme3[mtfa]=0 nvme3[hmpre]=0 nvme3[hmmin]=0 nvme3[tnvmcap]=0 nvme3[unvmcap]=0 nvme3[rpmbs]=0 nvme3[edstt]=0 nvme3[dsto]=0
00:11:15.498   nvme3[fwug]=0 nvme3[kas]=0 nvme3[hctma]=0 nvme3[mntmt]=0 nvme3[mxtmt]=0 nvme3[sanicap]=0 nvme3[hmminds]=0 nvme3[hmmaxd]=0 nvme3[nsetidmax]=0 nvme3[endgidmax]=1
00:11:15.498   nvme3[anatt]=0 nvme3[anacap]=0 nvme3[anagrpmax]=0 nvme3[nanagrpid]=0 nvme3[pels]=0 nvme3[domainid]=0 nvme3[megcap]=0 nvme3[sqes]=0x66 nvme3[cqes]=0x44 nvme3[maxcmd]=0
00:11:15.498   nvme3[nn]=256 nvme3[oncs]=0x15d nvme3[fuses]=0 nvme3[fna]=0 nvme3[vwc]=0x7 nvme3[awun]=0 nvme3[awupf]=0 nvme3[icsvscc]=0 nvme3[nwpc]=0 nvme3[acwu]=0
00:11:15.499   nvme3[ocfs]=0x3 nvme3[sgls]=0x1 nvme3[mnan]=0 nvme3[maxdna]=0 nvme3[maxcna]=0 nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 nvme3[ioccsz]=0 nvme3[iorcsz]=0 nvme3[icdoff]=0 nvme3[fcatt]=0
00:11:15.499   nvme3[msdbd]=0 nvme3[ofcs]=0 nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' nvme3[active_power_workload]=-
00:11:15.499 05:00:29 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
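The trace above is SPDK's nvme_get helper reading `nvme id-ctrl` output line by line: each line is split on ':' into a register name and a value, and the pair is stored in a per-controller bash associative array (here nvme3). A minimal sketch of that pattern, assuming nvme-cli is on PATH and /dev/nvme0 exists; the array name ctrl_regs is illustrative, not the suite's own:

    #!/usr/bin/env bash
    # Parse "reg : val" pairs from nvme-cli id-ctrl output into an associative array.
    declare -A ctrl_regs
    while IFS=: read -r reg val; do
        reg=${reg%% *}             # register key is the first token, e.g. "oncs"
        val=${val# }               # drop one leading space; trailing padding is
                                   # kept, which is why the log shows sn='12341 '
        [[ -n $reg && -n $val ]] || continue
        ctrl_regs[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    echo "oncs=${ctrl_regs[oncs]}"   # prints oncs=0x15d on these QEMU controllers

The eval visible in the trace exists only because the array name itself is dynamic (nvme0, nvme1, ...); with a fixed array name a plain assignment suffices.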
00:11:15.499 05:00:29 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:11:15.499 05:00:29 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:11:15.499 05:00:29 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:11:15.499 05:00:29 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:11:15.499 05:00:29 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:11:15.499 05:00:29 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:11:15.499 05:00:29 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc
00:11:15.499 05:00:29 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature scc))
00:11:15.499 05:00:29 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc -> function
00:11:15.499 05:00:29 nvme_scc -- nvme/functions.sh@196-197 -- # [per-controller trace condensed; each check reads oncs via a nameref and tests (( oncs & 1 << 8 )):]
00:11:15.499 05:00:29 nvme_scc -- # ctrl_has_scc nvme1: oncs=0x15d -> echo nvme1
00:11:15.500 05:00:29 nvme_scc -- # ctrl_has_scc nvme0: oncs=0x15d -> echo nvme0
00:11:15.500 05:00:29 nvme_scc -- # ctrl_has_scc nvme3: oncs=0x15d -> echo nvme3
00:11:15.500 05:00:30 nvme_scc -- # ctrl_has_scc nvme2: oncs=0x15d -> echo nvme2
00:11:15.500 05:00:30 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 ))
00:11:15.500 05:00:30 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1
00:11:15.500 05:00:30 nvme_scc -- nvme/functions.sh@207 -- # return 0
00:11:15.500 05:00:30 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:11:15.500 05:00:30 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:11:15.500 05:00:30 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:16.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:16.635 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:11:16.635 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:11:16.635 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:11:16.635 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
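The get_ctrls_with_feature pass above reduces to one bit test per controller: ONCS (Optional NVM Command Support) bit 8 advertises the Copy command, so a controller "has scc" exactly when (( oncs & 1 << 8 )) is non-zero. A sketch of that check, reusing the ctrl_regs array from the previous sketch (the function body approximates the traced ctrl_has_scc; it is not copied from the script):

    # Return success iff the controller's ONCS field advertises Simple Copy.
    ctrl_has_scc() {
        local -n regs=$1           # nameref: $1 is the *name* of a register array
        local oncs=${regs[oncs]:-0}
        (( oncs & 1 << 8 ))        # arithmetic truth doubles as the exit status
    }
    ctrl_has_scc ctrl_regs && echo "Simple Copy supported (oncs=${ctrl_regs[oncs]})"

The 0x15d seen for every controller in this run also has bits 0, 2, 3, 4 and 6 set, i.e. Compare, Dataset Management, Write Zeroes, Save/Select in Set Features and Timestamp, per the NVMe base specification's ONCS layout.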
00:11:16.635 05:00:31 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:16.635 ************************************
00:11:16.635 START TEST nvme_simple_copy
00:11:16.635 ************************************
00:11:16.635 05:00:31 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:17.202 Initializing NVMe Controllers
00:11:17.202 Attaching to 0000:00:10.0
00:11:17.202 Controller supports SCC. Attached to 0000:00:10.0
00:11:17.202 Namespace ID: 1 size: 6GB
00:11:17.202 Initialization complete.
00:11:17.202 
00:11:17.202 Controller QEMU NVMe Ctrl (12340 )
00:11:17.202 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:11:17.202 Namespace Block Size:4096
00:11:17.202 Writing LBAs 0 to 63 with Random Data
00:11:17.202 Copied LBAs from 0 - 63 to the Destination LBA 256
00:11:17.202 LBAs matching Written Data: 64
00:11:17.202 
00:11:17.202 real 0m0.313s
00:11:17.202 user 0m0.130s
00:11:17.203 sys 0m0.081s
00:11:17.203 05:00:31 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:17.203 05:00:31 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:11:17.203 ************************************
00:11:17.203 END TEST nvme_simple_copy
00:11:17.203 ************************************
00:11:17.203 
00:11:17.203 real 0m8.036s
00:11:17.203 user 0m1.285s
00:11:17.203 sys 0m1.726s
00:11:17.203 05:00:31 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:17.203 05:00:31 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:17.203 ************************************
00:11:17.203 END TEST nvme_scc
00:11:17.203 ************************************
00:11:17.203 05:00:31 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]]
00:11:17.203 05:00:31 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]]
00:11:17.203 05:00:31 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]]
00:11:17.203 05:00:31 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]]
00:11:17.203 05:00:31 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
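run_test, which wrapped nvme_simple_copy above and wraps nvme_fdp.sh next, produces the START/END banners and the real/user/sys timing seen in this log. A hedged sketch of that shape; the real common/autotest_common.sh helper also validates its arguments and toggles xtrace, which is omitted here, and the test name in the usage line is illustrative:

    # Banner-and-time wrapper in the style of run_test.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # run the test command with its own arguments
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    run_test my_test ./simple_copy -r 'trtype:pcie traddr:0000:00:10.0'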
00:11:17.203 ************************************
00:11:17.203 START TEST nvme_fdp
00:11:17.203 ************************************
00:11:17.203 05:00:31 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh
00:11:17.203 * Looking for test storage...
00:11:17.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:17.203 05:00:31 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:17.203 05:00:31 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:11:17.203 05:00:31 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:17.203 05:00:31 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:17.203 05:00:31 nvme_fdp -- paths/export.sh@2-6 -- # [repeated PATH exports condensed] PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:(same toolchain prefixes repeated by each re-source):/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:17.203 05:00:31 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:11:17.203 05:00:31 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:11:17.203 05:00:31 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:11:17.203 05:00:31 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:11:17.203 05:00:31 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
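The arrays just declared (ctrls, nvmes, bdfs, ordered_ctrls) plus one dynamically named array per controller are the whole state these tests query, and reads go through a bash nameref, as the earlier nvme_scc trace showed at functions.sh@69-76. A minimal sketch of that lookup; the body approximates the traced behavior rather than copying the script:

    # Echo one register of a controller whose array name is passed as a string.
    get_nvme_ctrl_feature() {
        local ctrl=$1 reg=$2
        [[ -n $ctrl ]] || return 1
        local -n _ctrl=$ctrl       # bind to the array whose *name* is in $ctrl
        [[ -n ${_ctrl[$reg]} ]] || return 1
        echo "${_ctrl[$reg]}"
    }
    get_nvme_ctrl_feature nvme1 oncs   # -> 0x15d in this run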
00:11:17.203 05:00:31 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:17.203 05:00:31 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:11:17.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:17.770 Waiting for block devices as requested
00:11:18.029 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:11:18.029 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:11:18.029 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:11:18.029 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:11:23.304 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:11:23.304 05:00:37 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:11:23.304 05:00:37 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:11:23.304 05:00:37 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:23.304 05:00:37 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:11:23.304 05:00:37 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:11:23.304 05:00:37 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:11:23.304 05:00:37 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]]
00:11:23.304 05:00:37 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]]
00:11:23.304 05:00:37 nvme_fdp -- scripts/common.sh@24 -- # return 0
00:11:23.304 05:00:37 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:11:23.304 05:00:37 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:11:23.304 05:00:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:11:23.304 05:00:37 nvme_fdp -- nvme/functions.sh@21-23 -- # [repetitive IFS=:/read -r reg val/eval trace condensed; values stored for nvme0:]
00:11:23.304   nvme0[vid]=0x1b36 nvme0[ssvid]=0x1af4 nvme0[sn]='12341 '
00:11:23.305   nvme0[mn]='QEMU NVMe Ctrl ' nvme0[fr]='8.0.0 ' nvme0[rab]=6 nvme0[ieee]=525400 nvme0[cmic]=0 nvme0[mdts]=7 nvme0[cntlid]=0 nvme0[ver]=0x10400 nvme0[rtd3r]=0 nvme0[rtd3e]=0
00:11:23.305   nvme0[oaes]=0x100 nvme0[ctratt]=0x8000 nvme0[rrls]=0 nvme0[cntrltype]=1 nvme0[fguid]=00000000-0000-0000-0000-000000000000 nvme0[crdt1]=0 nvme0[crdt2]=0 nvme0[crdt3]=0 nvme0[nvmsr]=0 nvme0[vwci]=0
00:11:23.305   nvme0[mec]=0 nvme0[oacs]=0x12a nvme0[acl]=3 nvme0[aerl]=3 nvme0[frmw]=0x3 nvme0[lpa]=0x7 nvme0[elpe]=0 nvme0[npss]=0 nvme0[avscc]=0 nvme0[apsta]=0
00:11:23.306   nvme0[wctemp]=343 nvme0[cctemp]=373 nvme0[mtfa]=0 nvme0[hmpre]=0 nvme0[hmmin]=0 nvme0[tnvmcap]=0 nvme0[unvmcap]=0 nvme0[rpmbs]=0 nvme0[edstt]=0 nvme0[dsto]=0
00:11:23.306   nvme0[fwug]=0 nvme0[kas]=0 nvme0[hctma]=0 nvme0[mntmt]=0 nvme0[mxtmt]=0 nvme0[sanicap]=0 nvme0[hmminds]=0 nvme0[hmmaxd]=0 nvme0[nsetidmax]=0 nvme0[endgidmax]=0
00:11:23.306   nvme0[anatt]=0 nvme0[anacap]=0 nvme0[anagrpmax]=0 nvme0[nanagrpid]=0 nvme0[pels]=0 nvme0[domainid]=0 nvme0[megcap]=0 nvme0[sqes]=0x66 nvme0[cqes]=0x44 nvme0[maxcmd]=0
00:11:23.307   nvme0[nn]=256 nvme0[oncs]=0x15d nvme0[fuses]=0 nvme0[fna]=0 nvme0[vwc]=0x7 nvme0[awun]=0 nvme0[awupf]=0 nvme0[icsvscc]=0 nvme0[nwpc]=0 nvme0[acwu]=0
00:11:23.307   nvme0[ocfs]=0x3 nvme0[sgls]=0x1 nvme0[mnan]=0 nvme0[maxdna]=0 nvme0[maxcna]=0 nvme0[subnqn]=nqn.2019-08.org.qemu:12341 nvme0[ioccsz]=0 nvme0[iorcsz]=0 nvme0[icdoff]=0 nvme0[fcatt]=0
00:11:23.307   nvme0[msdbd]=0 nvme0[ofcs]=0 nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:23.307 05:00:37 nvme_fdp --
nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:23.307 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.308 
05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:23.308 
05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.308 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:23.309 05:00:37 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
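The block above is the suite's nvme_get helper walking the output of nvme id-ns /dev/nvme0n1: with IFS set to ':' each "reg : val" line is split by read, lines without a value are skipped, and the pair is eval'd into a global associative array, so fields such as nsze, flbas and the lbaf0-7 format descriptors become ${nvme0n1[...]} lookups. A minimal standalone sketch of that pattern follows; it assumes bash 4.2+ and condenses the helper, so it is not the verbatim functions.sh source.

    # Sketch: parse "reg : val" lines from nvme-cli into an associative array,
    # mirroring the IFS=: / read / eval loop visible in the trace above.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # global associative array, as in the trace
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # field names carry no spaces
            [[ -n ${val//[[:space:]]/} ]] || continue   # skip banner lines with no value
            eval "${ref}[\$reg]=\$val"      # e.g. nvme0n1[nsze]=' 0x140000'
        done < <("$@")                      # remaining args are the command to run
    }
    # Usage: nvme_get_sketch nvme0n1 nvme id-ns /dev/nvme0n1
    #        echo "${nvme0n1[flbas]}"

Note that values keep their original padding (compare sn='12340 ' further down), which is why the trace shows quoted assignments with trailing spaces.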
00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:23.309 05:00:37 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:23.309 05:00:37 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:23.309 05:00:37 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:23.309 05:00:37 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:23.309 05:00:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:23.309 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 
05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
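Once id-ctrl has been parsed like this, later gates in the suite reduce to arithmetic on these hex fields. Below is a sketch of two such probes against the nvme1 array being filled here; the FDP bit position (CTRATT bit 19) comes from the NVMe base spec rather than anything printed in this log, and 4 KiB is assumed for the controller's minimum page size.

    declare -n nv1=nvme1                       # nameref onto the parsed id-ctrl array

    supports_fdp() {
        # CTRATT bit 19 advertises Flexible Data Placement, the feature this
        # nvme_fdp suite selects on; ctratt=0x8000 above does not set it.
        (( ${nv1[ctratt]:-0} & (1 << 19) ))
    }

    max_transfer_bytes() {
        # MDTS is a power-of-two multiple of the minimum page size (assumed 4 KiB)
        echo $(( 4096 << ${nv1[mdts]:-0} ))    # mdts=7 above -> 524288 (512 KiB)
    }

    supports_fdp && echo "FDP capable" || echo "no FDP"
    max_transfer_bytes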
00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:23.310 05:00:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.310 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
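A recurring gotcha with the two temperature thresholds just captured: id-ctrl reports wctemp and cctemp in Kelvin, so the 343/373 above are 70 C and 100 C. A one-line sketch using the same array (defaults match the values in this log so it also runs standalone):

    declare -n nv1=nvme1
    printf 'warning at %d C, critical at %d C\n' \
        $(( ${nv1[wctemp]:-343} - 273 )) $(( ${nv1[cctemp]:-373} - 273 ))   # 70 C / 100 C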
00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:23.311 05:00:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.311 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
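Zooming out, the functions.sh@47-63 lines interleaved through this dump (visible at the end of the nvme0n1 block above) are the enumeration scaffolding that drives every nvme_get call: each /sys/class/nvme/nvmeN is resolved to its PCI address, filtered through pci_can_use, parsed, and recorded in the ctrls/nvmes/bdfs/ordered_ctrls tables. A condensed sketch of that loop follows; pci_can_use and nvme_get are stubbed, and the device-link resolution shown is one plausible way to obtain the bdf, not necessarily the script's own.

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    pci_can_use() { true; }   # stub: the real helper honors PCI allow/block lists
    nvme_get()    { :;   }    # stub: see the parsing sketch earlier in this log

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(readlink -f "$ctrl/device")               # /sys/devices/.../0000:00:10.0
        pci=${pci##*/}
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                            # e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"   # fills the nvme1 array

        declare -n _ctrl_ns=${ctrl_dev}_ns              # per-controller namespace map
        for ns in "$ctrl/${ctrl##*/}n"*; do             # /sys/class/nvme/nvme1/nvme1n1 ...
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev                 # key = namespace number
        done
        unset -n _ctrl_ns

        ctrls[$ctrl_dev]=$ctrl_dev                      # name of the parsed id-ctrl array
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                 # name of its namespace map
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev      # index-sorted controller list
    done

This is why the trace ends each controller pass with the four bookkeeping assignments before moving on to the next /sys/class/nvme entry.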
00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.312 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
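
[editor's note] The trace repeating above is one loop iteration per output line of nvme-cli: IFS=: splits each "field : value" pair, read -r reg val consumes it, the [[ -n ... ]] guard skips empty values, and eval stores the pair into a named associative array (nvme1, nvme1n1, ...). A minimal standalone sketch of that pattern, assuming nvme-cli is installed and the device is readable; the array name ctrl_info and the trimming are illustrative, not the helper's exact code (the real one lives in nvme/functions.sh):

  #!/usr/bin/env bash
  # Sketch of the nvme_get pattern seen in the trace: parse
  # "field : value" lines from nvme-cli into an associative array.
  declare -A ctrl_info
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}              # nvme-cli pads the field name
      val="${val#"${val%%[![:space:]]*}"}"  # left-trim the value
      [[ -n $reg && -n $val ]] || continue  # skip blank/unsplit lines
      ctrl_info[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme1)         # typically needs root
  echo "sn=${ctrl_info[sn]:-?} mdts=${ctrl_info[mdts]:-?}"
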
00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
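
[editor's note] By this point the helper has captured nvme1n1's size fields (nsze/ncap/nuse = 0x17a17a blocks) plus its scalar attributes; the in-use LBA format that turns those block counts into bytes arrives a few records later (flbas=0x7 selects lbaf7, which reports lbads:12, i.e. 4096-byte blocks). A small worked example of that arithmetic, using the values from this trace; variable names are ours:

  #!/usr/bin/env bash
  # Sketch: convert captured id-ns fields into bytes.
  nsze=0x17a17a            # namespace size in logical blocks (from the trace)
  flbas=0x7                # formatted LBA size field (from the trace)
  lbads=12                 # lbaf7 carries lbads:12 later in this log
  fmt=$(( flbas & 0xf ))   # low nibble selects the active lbaf entry
  block_size=$(( 1 << lbads ))
  echo "nvme1n1: lbaf${fmt}, $(( nsze )) blocks x ${block_size} B = $(( nsze * block_size )) bytes"
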
00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 
05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.313 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
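
[editor's note] Each lbafN value above is a packed triple: metadata bytes per block (ms), log2 of the data block size (lbads), and relative performance (rp); nvme-cli additionally tags the entry selected by flbas with "(in use)". A minimal, hypothetical parser for one of those strings, using the lbaf3 value captured above; this is a sketch, not the helper's own code:

  #!/usr/bin/env bash
  # Sketch: pull ms/lbads/rp out of an lbaf string as captured above.
  lbaf='ms:64 lbads:9 rp:0 '
  if [[ $lbaf =~ ms:([0-9]+)\ +lbads:([0-9]+)\ +rp:([0-9]+) ]]; then
      ms=${BASH_REMATCH[1]}; lbads=${BASH_REMATCH[2]}; rp=${BASH_REMATCH[3]}
      echo "metadata ${ms} B/block, data $(( 1 << lbads )) B/block, rp ${rp}"
  fi
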
00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:23.314 05:00:37 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:23.314 05:00:37 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:23.314 05:00:37 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:23.314 05:00:37 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.314 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:23.315 05:00:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:23.315 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:23.316 05:00:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:23.316 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:23.317 05:00:37 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 
05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:23.317 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:23.318 05:00:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.318 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:23.580 05:00:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:23.580 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
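The trace above is functions.sh's nvme_get walking `nvme id-ns /dev/nvme2n2` output field by field: functions.sh@16 runs the nvme-cli binary, @20 declares a global associative array for the device, @21 splits each output line on `:` with `IFS=: read -r reg val`, @22 skips lines with no value, and @23 evals each pair into the array (e.g. nvme2n2[nsze]=0x100000). A minimal runnable sketch of that pattern follows; the helper name parse_id_output, the sample input, and the exact whitespace trimming are illustrative assumptions, not the SPDK sources.

    #!/usr/bin/env bash
    # Minimal sketch of the nvme_get parsing pattern traced above
    # (functions.sh@16-23). parse_id_output and the here-doc input are
    # hypothetical stand-ins; the trimming details are assumptions.
    parse_id_output() {            # $1 = name of the global assoc array to fill
        local ref=$1 reg val
        local -gA "$ref=()"        # same declaration style the trace shows at @20
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # @22: skip lines with no value
            reg=${reg//[[:space:]]/}             # strip padding around the key
            val=${val# }                         # drop the one leading space
            eval "${ref}[$reg]=\$val"            # @23: e.g. nvme2n2[nsze]=0x100000
        done
    }

    # Feed it a fragment shaped like `nvme id-ns` output:
    parse_id_output nvme2n2 <<'EOF'
    nsze    : 0x100000
    ncap    : 0x100000
    nlbaf   : 7
    flbas   : 0x4
    EOF
    echo "nsze=${nvme2n2[nsze]} nlbaf=${nvme2n2[nlbaf]}"   # nsze=0x100000 nlbaf=7

The eval is what lets the array name arrive as a plain string; on bash 4.3+ a `local -n` nameref would do the same job without eval.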
00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.581 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.582 05:00:37 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.582 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:23.583 05:00:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.583 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:23.584 05:00:37 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:23.584 05:00:37 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:23.584 05:00:37 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:23.584 05:00:37 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:23.584 05:00:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:23.584 05:00:38 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.584 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:23.585 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 
05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.586 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:23.587 05:00:38 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:23.587 05:00:38 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:11:23.587 05:00:38 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:11:23.587 05:00:38 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:23.587 05:00:38 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:23.587 05:00:38 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:24.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:24.725 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.725 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.725 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.725 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.725 05:00:39 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:24.725 05:00:39 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:24.725 05:00:39 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.725 05:00:39 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:24.725 ************************************ 00:11:24.725 START TEST nvme_flexible_data_placement 00:11:24.725 ************************************ 00:11:24.725 05:00:39 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:25.293 Initializing NVMe Controllers 00:11:25.293 Attaching to 0000:00:13.0 00:11:25.293 Controller supports FDP Attached to 0000:00:13.0 00:11:25.293 Namespace ID: 1 Endurance Group ID: 1 
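The xtrace above is nvme/functions.sh materializing the controller's identify data as a bash associative array: `read -r reg val` with `IFS=:` splits each `register : value` line, and `eval` stores it as `nvme3[reg]=value`; once the array is filled, `ctrl_has_fdp` declares a controller FDP-capable when CTRATT bit 19 is set. A minimal, self-contained sketch of that parse-and-check pattern, using two register values taken from this run (the here-string is a stand-in for the real identify output, not the script's exact input):

    #!/usr/bin/env bash
    # Sketch of the parsing loop traced above; sample data from this run.
    declare -A nvme3

    id_ctrl='ctratt : 0x88010
    mdts : 7'

    while IFS=: read -r reg val; do
        reg=${reg// /} val=${val// /}          # strip the padding around ':'
        [[ -n $val ]] && eval "nvme3[$reg]=\"$val\""
    done <<< "$id_ctrl"

    # CTRATT bit 19 is the Flexible Data Placement attribute; this run's
    # 0x88010 includes 0x80000, so the check succeeds.
    if (( ${nvme3[ctratt]} & 1 << 19 )); then
        echo "nvme3 supports FDP"
    fi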
00:11:25.293 Initialization complete. 00:11:25.293 00:11:25.293 ================================== 00:11:25.293 == FDP tests for Namespace: #01 == 00:11:25.293 ================================== 00:11:25.293 00:11:25.293 Get Feature: FDP: 00:11:25.293 ================= 00:11:25.293 Enabled: Yes 00:11:25.293 FDP configuration Index: 0 00:11:25.293 00:11:25.293 FDP configurations log page 00:11:25.293 =========================== 00:11:25.293 Number of FDP configurations: 1 00:11:25.293 Version: 0 00:11:25.293 Size: 112 00:11:25.293 FDP Configuration Descriptor: 0 00:11:25.293 Descriptor Size: 96 00:11:25.293 Reclaim Group Identifier format: 2 00:11:25.293 FDP Volatile Write Cache: Not Present 00:11:25.293 FDP Configuration: Valid 00:11:25.293 Vendor Specific Size: 0 00:11:25.293 Number of Reclaim Groups: 2 00:11:25.293 Number of Reclaim Unit Handles: 8 00:11:25.293 Max Placement Identifiers: 128 00:11:25.293 Number of Namespaces Supported: 256 00:11:25.293 Reclaim Unit Nominal Size: 6000000 bytes 00:11:25.293 Estimated Reclaim Unit Time Limit: Not Reported 00:11:25.293 RUH Desc #000: RUH Type: Initially Isolated 00:11:25.293 RUH Desc #001: RUH Type: Initially Isolated 00:11:25.293 RUH Desc #002: RUH Type: Initially Isolated 00:11:25.293 RUH Desc #003: RUH Type: Initially Isolated 00:11:25.293 RUH Desc #004: RUH Type: Initially Isolated 00:11:25.293 RUH Desc #005: RUH Type: Initially Isolated 00:11:25.293 RUH Desc #006: RUH Type: Initially Isolated 00:11:25.293 RUH Desc #007: RUH Type: Initially Isolated 00:11:25.293 00:11:25.293 FDP reclaim unit handle usage log page 00:11:25.293 ====================================== 00:11:25.293 Number of Reclaim Unit Handles: 8 00:11:25.293 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:25.293 RUH Usage Desc #001: RUH Attributes: Unused 00:11:25.293 RUH Usage Desc #002: RUH Attributes: Unused 00:11:25.293 RUH Usage Desc #003: RUH Attributes: Unused 00:11:25.293 RUH Usage Desc #004: RUH Attributes: Unused 00:11:25.293 RUH Usage Desc #005: RUH Attributes: Unused 00:11:25.293 RUH Usage Desc #006: RUH Attributes: Unused 00:11:25.293 RUH Usage Desc #007: RUH Attributes: Unused 00:11:25.293 00:11:25.293 FDP statistics log page 00:11:25.293 ======================= 00:11:25.293 Host bytes with metadata written: 838127616 00:11:25.293 Media bytes with metadata written: 838230016 00:11:25.293 Media bytes erased: 0 00:11:25.293 00:11:25.293 FDP Reclaim unit handle status 00:11:25.293 ============================== 00:11:25.293 Number of RUHS descriptors: 2 00:11:25.293 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000040b3 00:11:25.293 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:25.293 00:11:25.293 FDP write on placement id: 0 success 00:11:25.293 00:11:25.293 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:25.293 00:11:25.293 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:25.293 00:11:25.293 Get Feature: FDP Events for Placement handle: #0 00:11:25.293 ======================== 00:11:25.293 Number of FDP Events: 6 00:11:25.293 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:25.293 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:25.293 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:25.293 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:25.293 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:25.293 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
00:11:25.293 00:11:25.293 FDP events log page 00:11:25.293 =================== 00:11:25.293 Number of FDP events: 1 00:11:25.293 FDP Event #0: 00:11:25.293 Event Type: RU Not Written to Capacity 00:11:25.293 Placement Identifier: Valid 00:11:25.293 NSID: Valid 00:11:25.293 Location: Valid 00:11:25.293 Placement Identifier: 0 00:11:25.293 Event Timestamp: 9 00:11:25.293 Namespace Identifier: 1 00:11:25.293 Reclaim Group Identifier: 0 00:11:25.293 Reclaim Unit Handle Identifier: 0 00:11:25.293 00:11:25.293 FDP test passed 00:11:25.293 00:11:25.293 real 0m0.291s 00:11:25.293 user 0m0.094s 00:11:25.293 sys 0m0.094s 00:11:25.293 05:00:39 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.293 05:00:39 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:25.293 ************************************ 00:11:25.293 END TEST nvme_flexible_data_placement 00:11:25.293 ************************************ 00:11:25.293 00:11:25.293 real 0m8.024s 00:11:25.293 user 0m1.296s 00:11:25.293 sys 0m1.703s 00:11:25.293 05:00:39 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.293 ************************************ 00:11:25.293 END TEST nvme_fdp 00:11:25.293 ************************************ 00:11:25.293 05:00:39 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:25.293 05:00:39 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:11:25.293 05:00:39 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:25.293 05:00:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:25.293 05:00:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.293 05:00:39 -- common/autotest_common.sh@10 -- # set +x 00:11:25.293 ************************************ 00:11:25.293 START TEST nvme_rpc 00:11:25.293 ************************************ 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:25.293 * Looking for test storage... 
00:11:25.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:25.293 05:00:39 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:25.293 05:00:39 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@1522 -- # bdfs=() 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@1522 -- # local bdfs 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs)) 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@1523 -- # get_nvme_bdfs 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@1511 -- # bdfs=() 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@1511 -- # local bdfs 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@1513 -- # (( 4 == 0 )) 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@1525 -- # echo 0000:00:10.0 00:11:25.293 05:00:39 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:25.293 05:00:39 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=72033 00:11:25.293 05:00:39 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:25.293 05:00:39 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:25.293 05:00:39 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 72033 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 72033 ']' 00:11:25.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:25.293 05:00:39 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.553 [2024-07-24 05:00:40.007666] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
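The get_first_nvme_bdf call above works by asking gen_nvme.sh for a generated bdev config and extracting every traddr with jq; the first address in the list then becomes the target of the bdev_nvme_attach_controller call that follows. A condensed, standalone version of that discovery step, with paths as in this run:

    # Condensed form of the get_first_nvme_bdf trace above.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # this run: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
    echo "first bdf: ${bdfs[0]}"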
00:11:25.553 [2024-07-24 05:00:40.007909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72033 ] 00:11:25.811 [2024-07-24 05:00:40.184636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:25.811 [2024-07-24 05:00:40.414047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.811 [2024-07-24 05:00:40.414055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.747 05:00:41 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:26.747 05:00:41 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:26.747 05:00:41 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:26.747 Nvme0n1 00:11:27.006 05:00:41 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:27.006 05:00:41 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:27.265 request: 00:11:27.265 { 00:11:27.265 "bdev_name": "Nvme0n1", 00:11:27.265 "filename": "non_existing_file", 00:11:27.265 "method": "bdev_nvme_apply_firmware", 00:11:27.265 "req_id": 1 00:11:27.265 } 00:11:27.265 Got JSON-RPC error response 00:11:27.265 response: 00:11:27.265 { 00:11:27.265 "code": -32603, 00:11:27.265 "message": "open file failed." 00:11:27.265 } 00:11:27.265 05:00:41 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:27.265 05:00:41 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:27.265 05:00:41 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:27.265 05:00:41 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:27.265 05:00:41 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 72033 00:11:27.265 05:00:41 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 72033 ']' 00:11:27.265 05:00:41 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 72033 00:11:27.265 05:00:41 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:11:27.265 05:00:41 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:27.265 05:00:41 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72033 00:11:27.524 05:00:41 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:27.524 killing process with pid 72033 00:11:27.524 05:00:41 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:27.524 05:00:41 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72033' 00:11:27.524 05:00:41 nvme_rpc -- common/autotest_common.sh@967 -- # kill 72033 00:11:27.524 05:00:41 nvme_rpc -- common/autotest_common.sh@972 -- # wait 72033 00:11:29.428 00:11:29.428 real 0m3.910s 00:11:29.428 user 0m7.402s 00:11:29.428 sys 0m0.584s 00:11:29.428 ************************************ 00:11:29.428 END TEST nvme_rpc 00:11:29.428 ************************************ 00:11:29.428 05:00:43 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.428 05:00:43 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.428 05:00:43 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:29.428 05:00:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 
1 ']' 00:11:29.428 05:00:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.428 05:00:43 -- common/autotest_common.sh@10 -- # set +x 00:11:29.428 ************************************ 00:11:29.428 START TEST nvme_rpc_timeouts 00:11:29.428 ************************************ 00:11:29.428 05:00:43 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:29.428 * Looking for test storage... 00:11:29.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:29.428 05:00:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:29.428 05:00:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_72105 00:11:29.428 05:00:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_72105 00:11:29.428 05:00:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=72129 00:11:29.428 05:00:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:29.428 05:00:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:29.428 05:00:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 72129 00:11:29.428 05:00:43 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 72129 ']' 00:11:29.428 05:00:43 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.428 05:00:43 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:29.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.428 05:00:43 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.428 05:00:43 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:29.428 05:00:43 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:29.428 [2024-07-24 05:00:43.896087] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
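The timeouts test that follows saves the default configuration to /tmp/settings_default_72105, applies new timeout options over JSON-RPC, saves again to /tmp/settings_modified_72105, and compares the two snapshots setting by setting. Issued by hand against the same target, the three rpc.py calls traced below would look like this (a sketch of the same sequence, not a new API):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Snapshot the defaults, apply the modified timeouts, snapshot again.
    "$rpc" save_config > /tmp/settings_default_72105
    "$rpc" bdev_nvme_set_options \
        --timeout-us=12000000 \
        --timeout-admin-us=24000000 \
        --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified_72105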
00:11:29.428 [2024-07-24 05:00:43.896284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72129 ] 00:11:29.686 [2024-07-24 05:00:44.065601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:29.686 [2024-07-24 05:00:44.216875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.686 [2024-07-24 05:00:44.216893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.253 05:00:44 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:30.253 Checking default timeout settings: 00:11:30.253 05:00:44 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:11:30.253 05:00:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:30.253 05:00:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:30.820 Making settings changes with rpc: 00:11:30.820 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:30.820 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:30.820 Check default vs. modified settings: 00:11:30.820 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:30.820 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_72105 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_72105 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:31.387 Setting action_on_timeout is changed as expected. 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
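Each "changed as expected" line above comes from the same three-stage reduction: grep the setting's line out of a snapshot, keep the second field with awk, and strip everything but alphanumerics with sed, then compare the before and after tokens. A hypothetical wrapper around that chain, with file names and expected values from this run:

    check_setting() {
        # Hypothetical helper wrapping the grep | awk | sed chain above.
        local setting=$1 before after
        before=$(grep "$setting" /tmp/settings_default_72105 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_72105 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected ($before -> $after)"
    }

    check_setting action_on_timeout   # none -> abort in this run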
00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_72105 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_72105 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:31.387 Setting timeout_us is changed as expected. 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_72105 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_72105 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:31.387 Setting timeout_admin_us is changed as expected. 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
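Taken together, the three checks in this run verified the following default and modified values:

    setting             default    modified
    action_on_timeout   none       abort
    timeout_us          0          12000000
    timeout_admin_us    0          24000000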
00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_72105 /tmp/settings_modified_72105 00:11:31.387 05:00:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 72129 00:11:31.387 05:00:45 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 72129 ']' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 72129 00:11:31.387 05:00:45 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:11:31.387 05:00:45 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72129 00:11:31.387 05:00:45 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:31.387 killing process with pid 72129 00:11:31.387 05:00:45 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72129' 00:11:31.387 05:00:45 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 72129 00:11:31.387 05:00:45 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 72129 00:11:33.288 RPC TIMEOUT SETTING TEST PASSED. 00:11:33.288 05:00:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:33.288 00:11:33.288 real 0m3.929s 00:11:33.288 user 0m7.503s 00:11:33.288 sys 0m0.542s 00:11:33.288 05:00:47 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.288 ************************************ 00:11:33.288 END TEST nvme_rpc_timeouts 00:11:33.288 ************************************ 00:11:33.288 05:00:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:33.288 05:00:47 -- spdk/autotest.sh@243 -- # uname -s 00:11:33.288 05:00:47 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:11:33.288 05:00:47 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:33.288 05:00:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:33.288 05:00:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.288 05:00:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.288 ************************************ 00:11:33.288 START TEST sw_hotplug 00:11:33.288 ************************************ 00:11:33.288 05:00:47 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:33.288 * Looking for test storage... 
00:11:33.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:33.288 05:00:47 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:33.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.830 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:33.830 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:33.830 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:33.830 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:33.830 05:00:48 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:33.830 05:00:48 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:33.830 05:00:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:11:33.830 05:00:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@230 -- # local class 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:11:33.830 05:00:48 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:33.830 05:00:48 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:11:33.831 05:00:48 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:33.831 05:00:48 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:33.831 05:00:48 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:33.831 05:00:48 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:34.089 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:34.348 Waiting for block devices as requested 00:11:34.348 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.607 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.607 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.607 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.878 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:39.878 05:00:54 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:39.878 05:00:54 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:40.137 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:40.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:40.137 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:40.707 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:40.707 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.707 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.966 05:00:55 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:40.966 05:00:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.966 05:00:55 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:40.966 05:00:55 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:40.966 05:00:55 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=72983 00:11:40.966 05:00:55 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:40.966 05:00:55 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:40.966 05:00:55 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:40.966 05:00:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:40.966 05:00:55 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:11:40.966 05:00:55 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:11:40.966 05:00:55 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:11:40.966 05:00:55 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:11:40.966 05:00:55 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:11:40.966 05:00:55 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:40.966 05:00:55 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:40.966 05:00:55 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:40.966 05:00:55 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:40.966 05:00:55 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:41.225 Initializing NVMe Controllers 00:11:41.225 Attaching to 0000:00:10.0 00:11:41.225 Attaching to 0000:00:11.0 00:11:41.225 Attached to 0000:00:10.0 00:11:41.225 Attached to 0000:00:11.0 00:11:41.225 Initialization complete. Starting I/O... 
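Worth unpacking the enumeration traced just before the attach: nvme_in_userspace walks PCI class 01 (mass storage), subclass 08 (non-volatile memory controller), prog-if 02 (NVMe), exactly as the scripts/common.sh xtrace shows. Its core is the following lspci pipeline, runnable standalone; note that cc is deliberately assigned "0108" with the quotes included, so it compares against lspci's quoted class field verbatim:

  # List NVMe controllers by PCI class code, per the iter_pci_class_code trace.
  lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # -> 0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0 (one per line)

The script then clips the array to nvme_count=2, which is why only 0000:00:10.0 and 0000:00:11.0 take part in the hotplug cycles while 12.0 and 13.0 stay outside PCI_ALLOWED.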
00:11:41.226 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:41.226 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:41.226 00:11:42.162 QEMU NVMe Ctrl (12340 ): 1289 I/Os completed (+1289) 00:11:42.162 QEMU NVMe Ctrl (12341 ): 1348 I/Os completed (+1348) 00:11:42.162 00:11:43.112 QEMU NVMe Ctrl (12340 ): 2990 I/Os completed (+1701) 00:11:43.112 QEMU NVMe Ctrl (12341 ): 3104 I/Os completed (+1756) 00:11:43.112 00:11:44.511 QEMU NVMe Ctrl (12340 ): 4982 I/Os completed (+1992) 00:11:44.511 QEMU NVMe Ctrl (12341 ): 5118 I/Os completed (+2014) 00:11:44.511 00:11:45.079 QEMU NVMe Ctrl (12340 ): 6982 I/Os completed (+2000) 00:11:45.079 QEMU NVMe Ctrl (12341 ): 7132 I/Os completed (+2014) 00:11:45.079 00:11:46.455 QEMU NVMe Ctrl (12340 ): 9010 I/Os completed (+2028) 00:11:46.455 QEMU NVMe Ctrl (12341 ): 9202 I/Os completed (+2070) 00:11:46.455 00:11:47.023 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:47.023 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.023 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.023 [2024-07-24 05:01:01.471508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:47.023 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:47.023 [2024-07-24 05:01:01.473442] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 [2024-07-24 05:01:01.473541] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 [2024-07-24 05:01:01.473568] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 [2024-07-24 05:01:01.473594] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:47.023 [2024-07-24 05:01:01.476757] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 [2024-07-24 05:01:01.476813] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 [2024-07-24 05:01:01.476872] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 [2024-07-24 05:01:01.476913] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.023 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.023 [2024-07-24 05:01:01.507452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:47.023 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:47.023 [2024-07-24 05:01:01.509119] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 [2024-07-24 05:01:01.509214] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 [2024-07-24 05:01:01.509258] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 [2024-07-24 05:01:01.509279] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:47.023 [2024-07-24 05:01:01.511714] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 [2024-07-24 05:01:01.511790] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 [2024-07-24 05:01:01.511817] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 [2024-07-24 05:01:01.511836] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.023 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:47.023 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:47.023 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:47.023 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:47.023 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:47.283 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:47.283 00:11:47.283 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:47.283 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:47.283 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:47.283 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:47.283 Attaching to 0000:00:10.0 00:11:47.283 Attached to 0000:00:10.0 00:11:47.283 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:47.283 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:47.283 05:01:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:47.283 Attaching to 0000:00:11.0 00:11:47.283 Attached to 0000:00:11.0 00:11:48.220 QEMU NVMe Ctrl (12340 ): 1988 I/Os completed (+1988) 00:11:48.220 QEMU NVMe Ctrl (12341 ): 1799 I/Os completed (+1799) 00:11:48.220 00:11:49.157 QEMU NVMe Ctrl (12340 ): 3884 I/Os completed (+1896) 00:11:49.157 QEMU NVMe Ctrl (12341 ): 3764 I/Os completed (+1965) 00:11:49.157 00:11:50.095 QEMU NVMe Ctrl (12340 ): 5885 I/Os completed (+2001) 00:11:50.095 QEMU NVMe Ctrl (12341 ): 5775 I/Os completed (+2011) 00:11:50.095 00:11:51.471 QEMU NVMe Ctrl (12340 ): 7897 I/Os completed (+2012) 00:11:51.471 QEMU NVMe Ctrl (12341 ): 7795 I/Os completed (+2020) 00:11:51.471 00:11:52.422 QEMU NVMe Ctrl (12340 ): 9889 I/Os completed (+1992) 00:11:52.422 QEMU NVMe Ctrl (12341 ): 9830 I/Os completed (+2035) 00:11:52.422 00:11:53.357 QEMU NVMe Ctrl (12340 ): 11889 I/Os completed (+2000) 00:11:53.357 QEMU NVMe Ctrl (12341 ): 11863 I/Os completed (+2033) 00:11:53.357 00:11:54.293 QEMU NVMe Ctrl (12340 ): 13794 I/Os completed (+1905) 00:11:54.293 QEMU NVMe Ctrl (12341 ): 13839 I/Os completed (+1976) 00:11:54.293 00:11:55.228 QEMU NVMe Ctrl (12340 ): 15794 I/Os completed (+2000) 00:11:55.228 QEMU NVMe Ctrl (12341 ): 15871 I/Os completed (+2032) 00:11:55.228 
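Each hotplug event in this run is driven from the shell rather than by real hardware: the echo 1 at sw_hotplug.sh@40 surprise-removes a controller, and the @56-@62 echoes bring it back and restore its userspace driver. The xtrace does not show the redirection targets, but the standard kernel sysfs interface a test like this rests on looks as follows (the rescan write is confirmed verbatim by the trap registered later in the log; the per-device paths are an assumption about the script, not a quote of it):

  bdf=0000:00:10.0                                 # example controller
  echo 1 > "/sys/bus/pci/devices/$bdf/remove"      # surprise-remove it
  echo 1 > /sys/bus/pci/rescan                     # rediscover devices
  echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe         # rebind chosen driver

The bursts of aborting-outstanding-command errors and "in failed state" notices after each removal are the expected reaction: in-flight admin commands on the yanked controller are aborted before the device is unregistered.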
00:11:56.164 QEMU NVMe Ctrl (12340 ): 17570 I/Os completed (+1776) 00:11:56.164 QEMU NVMe Ctrl (12341 ): 17736 I/Os completed (+1865) 00:11:56.164 00:11:57.099 QEMU NVMe Ctrl (12340 ): 19542 I/Os completed (+1972) 00:11:57.099 QEMU NVMe Ctrl (12341 ): 19734 I/Os completed (+1998) 00:11:57.099 00:11:58.475 QEMU NVMe Ctrl (12340 ): 21438 I/Os completed (+1896) 00:11:58.475 QEMU NVMe Ctrl (12341 ): 21689 I/Os completed (+1955) 00:11:58.475 00:11:59.411 QEMU NVMe Ctrl (12340 ): 23442 I/Os completed (+2004) 00:11:59.411 QEMU NVMe Ctrl (12341 ): 23701 I/Os completed (+2012) 00:11:59.411 00:11:59.411 05:01:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:59.411 05:01:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:59.411 05:01:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:59.411 05:01:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:59.411 [2024-07-24 05:01:13.826368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:59.411 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:59.411 [2024-07-24 05:01:13.828289] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.411 [2024-07-24 05:01:13.828384] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.411 [2024-07-24 05:01:13.828413] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.411 [2024-07-24 05:01:13.828438] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.411 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:59.411 [2024-07-24 05:01:13.831246] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.411 [2024-07-24 05:01:13.831315] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.411 [2024-07-24 05:01:13.831340] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.411 [2024-07-24 05:01:13.831364] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.411 05:01:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:59.411 05:01:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:59.411 [2024-07-24 05:01:13.852925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:59.411 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:59.411 [2024-07-24 05:01:13.854737] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.411 [2024-07-24 05:01:13.854805] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.412 [2024-07-24 05:01:13.854856] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.412 [2024-07-24 05:01:13.854882] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.412 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:59.412 [2024-07-24 05:01:13.857474] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.412 [2024-07-24 05:01:13.857527] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.412 [2024-07-24 05:01:13.857555] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.412 [2024-07-24 05:01:13.857579] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.412 05:01:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:59.412 05:01:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:59.412 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:59.412 EAL: Scan for (pci) bus failed. 00:11:59.412 05:01:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:59.412 05:01:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:59.412 05:01:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:59.412 05:01:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:59.671 05:01:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:59.671 05:01:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:59.671 05:01:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:59.671 05:01:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:59.671 Attaching to 0000:00:10.0 00:11:59.671 Attached to 0000:00:10.0 00:11:59.671 05:01:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:59.671 05:01:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:59.671 05:01:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:59.671 Attaching to 0000:00:11.0 00:11:59.671 Attached to 0000:00:11.0 00:12:00.239 QEMU NVMe Ctrl (12340 ): 1236 I/Os completed (+1236) 00:12:00.239 QEMU NVMe Ctrl (12341 ): 1088 I/Os completed (+1088) 00:12:00.239 00:12:01.176 QEMU NVMe Ctrl (12340 ): 3199 I/Os completed (+1963) 00:12:01.176 QEMU NVMe Ctrl (12341 ): 3075 I/Os completed (+1987) 00:12:01.176 00:12:02.114 QEMU NVMe Ctrl (12340 ): 5083 I/Os completed (+1884) 00:12:02.114 QEMU NVMe Ctrl (12341 ): 5005 I/Os completed (+1930) 00:12:02.114 00:12:03.493 QEMU NVMe Ctrl (12340 ): 7023 I/Os completed (+1940) 00:12:03.493 QEMU NVMe Ctrl (12341 ): 6988 I/Os completed (+1983) 00:12:03.493 00:12:04.432 QEMU NVMe Ctrl (12340 ): 8998 I/Os completed (+1975) 00:12:04.432 QEMU NVMe Ctrl (12341 ): 8990 I/Os completed (+2002) 00:12:04.432 00:12:05.369 QEMU NVMe Ctrl (12340 ): 10958 I/Os completed (+1960) 00:12:05.369 QEMU NVMe Ctrl (12341 ): 10977 I/Os completed (+1987) 00:12:05.369 00:12:06.306 QEMU NVMe Ctrl (12340 ): 12886 I/Os completed (+1928) 00:12:06.306 QEMU NVMe Ctrl (12341 ): 12964 I/Os completed (+1987) 00:12:06.306 
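While the second event cycle ticks along, note how its duration will be reported: timing_cmd (its xtrace showed local time=0 TIMEFORMAT=%2R plus an exec) wraps remove_attach_helper in bash's time keyword so that only the real-time figure is captured. A simplified reconstruction, with the error-status plumbing left out:

  # Capture `time`'s %2R report while the timed command's own output
  # still reaches the console. Sketch of the traced timing_cmd.
  timing_cmd_sketch() {
      local time TIMEFORMAT=%2R
      exec {out}>&1                                # save the real stdout
      time=$({ time "$@" >&"$out" 2>&1; } 2>&1)    # substitution sees only the timer
      exec {out}>&-
      echo "$time"
  }

The "remove_attach_helper took 43.06s" line printed a little further down is formatted from exactly this captured value.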
00:12:07.244 QEMU NVMe Ctrl (12340 ): 14899 I/Os completed (+2013) 00:12:07.244 QEMU NVMe Ctrl (12341 ): 15001 I/Os completed (+2037) 00:12:07.244 00:12:08.181 QEMU NVMe Ctrl (12340 ): 16798 I/Os completed (+1899) 00:12:08.181 QEMU NVMe Ctrl (12341 ): 16944 I/Os completed (+1943) 00:12:08.181 00:12:09.117 QEMU NVMe Ctrl (12340 ): 19099 I/Os completed (+2301) 00:12:09.117 QEMU NVMe Ctrl (12341 ): 19466 I/Os completed (+2522) 00:12:09.117 00:12:10.095 QEMU NVMe Ctrl (12340 ): 21055 I/Os completed (+1956) 00:12:10.095 QEMU NVMe Ctrl (12341 ): 21453 I/Os completed (+1987) 00:12:10.095 00:12:11.472 QEMU NVMe Ctrl (12340 ): 23035 I/Os completed (+1980) 00:12:11.472 QEMU NVMe Ctrl (12341 ): 23457 I/Os completed (+2004) 00:12:11.472 00:12:11.731 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:11.731 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:11.731 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:11.731 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:11.731 [2024-07-24 05:01:26.150373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:11.731 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:11.731 [2024-07-24 05:01:26.152306] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 [2024-07-24 05:01:26.152373] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 [2024-07-24 05:01:26.152402] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 [2024-07-24 05:01:26.152428] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:11.731 [2024-07-24 05:01:26.155199] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 [2024-07-24 05:01:26.155263] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 [2024-07-24 05:01:26.155288] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 [2024-07-24 05:01:26.155311] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:11.731 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:11.731 [2024-07-24 05:01:26.186422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:11.731 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:11.731 [2024-07-24 05:01:26.190336] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 [2024-07-24 05:01:26.190401] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 [2024-07-24 05:01:26.190447] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 [2024-07-24 05:01:26.190470] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:11.731 [2024-07-24 05:01:26.193083] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 [2024-07-24 05:01:26.193136] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 [2024-07-24 05:01:26.193174] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 [2024-07-24 05:01:26.193195] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.731 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:11.731 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:11.731 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:11.731 EAL: Scan for (pci) bus failed. 00:12:11.731 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.731 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.731 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:11.991 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:11.991 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.991 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.991 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.991 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:11.991 Attaching to 0000:00:10.0 00:12:11.991 Attached to 0000:00:10.0 00:12:11.991 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:11.991 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.991 05:01:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:11.991 Attaching to 0000:00:11.0 00:12:11.991 Attached to 0000:00:11.0 00:12:11.991 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:11.991 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:11.991 [2024-07-24 05:01:26.539321] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:24.195 05:01:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:24.195 05:01:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:24.195 05:01:38 sw_hotplug -- common/autotest_common.sh@715 -- # time=43.06 00:12:24.195 05:01:38 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.06 00:12:24.195 05:01:38 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:12:24.195 05:01:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.06 00:12:24.195 05:01:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.06 2 00:12:24.195 remove_attach_helper took 43.06s to complete (handling 2 nvme drive(s)) 05:01:38 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:30.762 05:01:44 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 72983 00:12:30.762 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (72983) - No such process 00:12:30.762 05:01:44 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 72983 00:12:30.762 05:01:44 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:30.762 05:01:44 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:30.762 05:01:44 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:30.762 05:01:44 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=73519 00:12:30.762 05:01:44 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:30.762 05:01:44 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:30.762 05:01:44 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 73519 00:12:30.762 05:01:44 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 73519 ']' 00:12:30.762 05:01:44 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.762 05:01:44 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.762 05:01:44 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.762 05:01:44 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.762 05:01:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:30.762 [2024-07-24 05:01:44.661449] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
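tgt_run_hotplug switches the test into bdev mode: instead of the standalone hotplug example, a full spdk_tgt is started and waitforlisten blocks (max_retries=100) until the RPC listener on /var/tmp/spdk.sock is usable. A sketch of that readiness loop, simplified to a socket-existence probe; SPDK's real helper also exercises the RPC endpoint itself:

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      while ((max_retries-- > 0)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
          [[ -S $rpc_addr ]] && return 0           # socket up => listening
          sleep 0.5
      done
      return 1                                     # gave up waiting
  }

Once the listener is up, the script enables the target's own hotplug monitor over RPC (the bdev_nvme_set_hotplug -e call traced below), so attach and detach are now observed through bdev_get_bdevs rather than through the example binary's callbacks.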
00:12:30.762 [2024-07-24 05:01:44.661636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73519 ] 00:12:30.762 [2024-07-24 05:01:44.832184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.762 [2024-07-24 05:01:45.037527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.334 05:01:45 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.334 05:01:45 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:12:31.334 05:01:45 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:31.334 05:01:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.334 05:01:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.334 05:01:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.334 05:01:45 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:31.334 05:01:45 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:31.334 05:01:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:31.334 05:01:45 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:12:31.334 05:01:45 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:12:31.334 05:01:45 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:12:31.334 05:01:45 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:12:31.334 05:01:45 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:12:31.334 05:01:45 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:31.334 05:01:45 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:31.334 05:01:45 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:31.334 05:01:45 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:31.334 05:01:45 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:37.906 05:01:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:37.906 05:01:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:37.906 05:01:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:37.906 05:01:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:37.906 05:01:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:37.906 05:01:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:37.906 05:01:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:37.906 05:01:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:37.906 05:01:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:37.906 05:01:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:37.906 05:01:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:37.906 05:01:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.906 05:01:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.906 [2024-07-24 05:01:51.771599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:37.906 [2024-07-24 05:01:51.774146] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.906 [2024-07-24 05:01:51.774228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.906 [2024-07-24 05:01:51.774280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.906 [2024-07-24 05:01:51.774306] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.906 [2024-07-24 05:01:51.774325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.906 [2024-07-24 05:01:51.774339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.906 [2024-07-24 05:01:51.774355] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.906 [2024-07-24 05:01:51.774368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.906 [2024-07-24 05:01:51.774382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.906 [2024-07-24 05:01:51.774396] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.906 [2024-07-24 05:01:51.774412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.906 [2024-07-24 05:01:51.774425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.906 05:01:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.906 05:01:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:37.906 05:01:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:37.906 [2024-07-24 05:01:52.271659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:37.906 [2024-07-24 05:01:52.274267] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.906 [2024-07-24 05:01:52.274345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.906 [2024-07-24 05:01:52.274365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.906 [2024-07-24 05:01:52.274393] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.906 [2024-07-24 05:01:52.274407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.906 [2024-07-24 05:01:52.274421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.906 [2024-07-24 05:01:52.274434] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.906 [2024-07-24 05:01:52.274448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.906 [2024-07-24 05:01:52.274460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.906 [2024-07-24 05:01:52.274474] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.906 [2024-07-24 05:01:52.274486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.906 [2024-07-24 05:01:52.274499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.906 [2024-07-24 05:01:52.274516] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:12:37.906 [2024-07-24 05:01:52.274535] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:12:37.906 [2024-07-24 05:01:52.274562] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:12:37.906 [2024-07-24 05:01:52.274591] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:12:37.906 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:37.906 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:37.906 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:37.906 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:37.906 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:37.906 05:01:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.906 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:37.906 05:01:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.906 05:01:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.906 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:37.906 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:37.906 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:37.906 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:37.906 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- 
# echo 0000:00:10.0 00:12:38.165 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:38.165 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:38.165 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:38.165 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:38.165 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:38.165 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:38.165 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:38.165 05:01:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.420 05:02:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.420 05:02:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.420 05:02:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.420 [2024-07-24 05:02:04.771736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:50.420 [2024-07-24 05:02:04.775243] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.420 [2024-07-24 05:02:04.775294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.420 [2024-07-24 05:02:04.775319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.420 [2024-07-24 05:02:04.775346] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.420 [2024-07-24 05:02:04.775393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.420 [2024-07-24 05:02:04.775421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.420 [2024-07-24 05:02:04.775437] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.420 [2024-07-24 05:02:04.775451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.420 [2024-07-24 05:02:04.775464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.420 [2024-07-24 05:02:04.775478] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.420 [2024-07-24 05:02:04.775492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.420 [2024-07-24 05:02:04.775504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.420 05:02:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.420 05:02:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.420 05:02:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:50.420 05:02:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:50.679 [2024-07-24 05:02:05.171725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:50.679 [2024-07-24 05:02:05.174595] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.679 [2024-07-24 05:02:05.174661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.679 [2024-07-24 05:02:05.174680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.679 [2024-07-24 05:02:05.174706] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.679 [2024-07-24 05:02:05.174721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.679 [2024-07-24 05:02:05.174735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.679 [2024-07-24 05:02:05.174750] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.679 [2024-07-24 05:02:05.174764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.679 [2024-07-24 05:02:05.174776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.679 [2024-07-24 05:02:05.174792] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.679 [2024-07-24 05:02:05.174804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.679 [2024-07-24 05:02:05.174834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.937 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:50.937 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:50.937 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:50.937 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.937 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.937 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.937 05:02:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.937 05:02:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.937 05:02:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.937 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:50.937 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:50.937 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:50.937 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:50.937 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:51.195 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:51.195 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:51.195 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:51.195 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:51.195 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:51.195 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:51.195 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:51.195 05:02:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.392 05:02:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.392 05:02:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.392 05:02:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.392 05:02:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.392 05:02:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.392 05:02:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.392 [2024-07-24 05:02:17.871923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:03.392 05:02:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:03.392 [2024-07-24 05:02:17.875198] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.392 [2024-07-24 05:02:17.875249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.392 [2024-07-24 05:02:17.875276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.392 [2024-07-24 05:02:17.875304] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.392 [2024-07-24 05:02:17.875321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.392 [2024-07-24 05:02:17.875365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.392 [2024-07-24 05:02:17.875410] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.392 [2024-07-24 05:02:17.875438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.392 [2024-07-24 05:02:17.875452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.392 [2024-07-24 05:02:17.875466] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.392 [2024-07-24 05:02:17.875481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.392 [2024-07-24 05:02:17.875493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.651 [2024-07-24 05:02:18.271895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:03.651 [2024-07-24 05:02:18.274555] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.651 [2024-07-24 05:02:18.274621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.651 [2024-07-24 05:02:18.274640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.651 [2024-07-24 05:02:18.274668] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.651 [2024-07-24 05:02:18.274683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.651 [2024-07-24 05:02:18.274703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.651 [2024-07-24 05:02:18.274718] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.651 [2024-07-24 05:02:18.274735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.651 [2024-07-24 05:02:18.274747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.651 [2024-07-24 05:02:18.274764] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.651 [2024-07-24 05:02:18.274776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.651 [2024-07-24 05:02:18.274793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.910 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:03.910 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:03.910 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:03.910 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.910 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.910 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.910 05:02:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.910 05:02:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.910 05:02:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.910 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:03.910 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:03.910 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:03.910 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:03.910 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:04.169 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:04.169 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:04.169 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:04.169 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:04.169 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:04.169 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:04.169 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:04.169 05:02:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.13 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.13 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.13 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.13 2 00:13:16.375 remove_attach_helper took 45.13s to complete (handling 2 nvme drive(s)) 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:13:16.375 05:02:30 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:16.375 05:02:30 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:16.375 05:02:30 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:22.939 05:02:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:22.939 05:02:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:22.939 05:02:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:22.939 05:02:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:22.939 05:02:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:22.939 05:02:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:22.939 05:02:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:22.939 05:02:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:22.939 05:02:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:22.939 05:02:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:22.939 05:02:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:22.939 05:02:36 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.939 05:02:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.939 [2024-07-24 05:02:36.934792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:22.939 [2024-07-24 05:02:36.936919] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.940 [2024-07-24 05:02:36.937028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.940 [2024-07-24 05:02:36.937058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.940 [2024-07-24 05:02:36.937114] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.940 [2024-07-24 05:02:36.937142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.940 [2024-07-24 05:02:36.937157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.940 [2024-07-24 05:02:36.937187] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.940 [2024-07-24 05:02:36.937201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.940 [2024-07-24 05:02:36.937219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.940 [2024-07-24 05:02:36.937234] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.940 [2024-07-24 05:02:36.937249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.940 [2024-07-24 05:02:36.937263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.940 05:02:36 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.940 05:02:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:22.940 05:02:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:22.940 [2024-07-24 05:02:37.434779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:22.940 [2024-07-24 05:02:37.436682] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.940 [2024-07-24 05:02:37.436758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.940 [2024-07-24 05:02:37.436778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.940 [2024-07-24 05:02:37.436806] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.940 [2024-07-24 05:02:37.436820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.940 [2024-07-24 05:02:37.436834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.940 [2024-07-24 05:02:37.436848] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.940 [2024-07-24 05:02:37.436877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.940 [2024-07-24 05:02:37.436891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.940 [2024-07-24 05:02:37.436907] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.940 [2024-07-24 05:02:37.436920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.940 [2024-07-24 05:02:37.436937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.940 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:22.940 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:22.940 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:22.940 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:22.940 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:22.940 05:02:37 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.940 05:02:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.940 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:22.940 05:02:37 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.940 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:22.940 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:23.199 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:23.199 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:23.199 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:23.199 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:23.199 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:23.199 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:23.199 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:23.199 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:23.199 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:23.199 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:23.199 05:02:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:35.414 05:02:49 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.414 05:02:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.414 05:02:49 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:35.414 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:35.415 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:35.415 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:35.415 05:02:49 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.415 05:02:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.415 [2024-07-24 05:02:49.934896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
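The remove/re-attach steps themselves are only partially visible: xtrace records the values being echoed (1, uio_pci_generic, the BDF twice, then an empty string) but never the sysfs files they are redirected into. The following is a plausible reconstruction of sw_hotplug.sh@39-40 and @56-62 assuming the standard Linux PCI hotplug interface; every path below is an assumption, and the BDF being echoed twice would fit writing both a probe trigger and an explicit bind.

remove_devices() {               # sw_hotplug.sh@39-40: surprise-remove each function
    local dev
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done
}

reattach_devices() {             # sw_hotplug.sh@56-62: rescan, then rebind to uio_pci_generic
    local dev
    echo 1 > /sys/bus/pci/rescan
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"   # clear the override
    done
}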
00:13:35.415 [2024-07-24 05:02:49.936677] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.415 [2024-07-24 05:02:49.936725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.415 [2024-07-24 05:02:49.936750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.415 [2024-07-24 05:02:49.936777] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.415 [2024-07-24 05:02:49.936795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.415 [2024-07-24 05:02:49.936810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.415 [2024-07-24 05:02:49.936827] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.415 [2024-07-24 05:02:49.936857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.415 [2024-07-24 05:02:49.936876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.415 [2024-07-24 05:02:49.936892] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.415 [2024-07-24 05:02:49.936909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.415 [2024-07-24 05:02:49.936923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.415 05:02:49 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.415 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:35.415 05:02:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:35.982 [2024-07-24 05:02:50.334891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:35.982 [2024-07-24 05:02:50.336554] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.982 [2024-07-24 05:02:50.336633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.982 [2024-07-24 05:02:50.336653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.982 [2024-07-24 05:02:50.336678] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.982 [2024-07-24 05:02:50.336692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.982 [2024-07-24 05:02:50.336706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.982 [2024-07-24 05:02:50.336720] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.982 [2024-07-24 05:02:50.336733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.982 [2024-07-24 05:02:50.336746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.982 [2024-07-24 05:02:50.336760] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.982 [2024-07-24 05:02:50.336772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.982 [2024-07-24 05:02:50.336785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.982 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:35.982 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:35.982 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:35.982 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:35.983 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:35.983 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:35.983 05:02:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.983 05:02:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.983 05:02:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.983 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:35.983 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:36.241 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:36.241 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:36.241 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:36.241 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:36.241 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:36.241 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:36.241 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:36.241 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:36.241 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:36.241 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:36.241 05:02:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:48.442 05:03:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.442 05:03:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.442 05:03:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:48.442 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.442 05:03:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.442 05:03:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.442 [2024-07-24 05:03:02.935120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:48.442 [2024-07-24 05:03:02.936953] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.442 [2024-07-24 05:03:02.937001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.442 [2024-07-24 05:03:02.937026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.442 [2024-07-24 05:03:02.937053] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.442 [2024-07-24 05:03:02.937077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.443 [2024-07-24 05:03:02.937091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.443 [2024-07-24 05:03:02.937109] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.443 [2024-07-24 05:03:02.937123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.443 [2024-07-24 05:03:02.937139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.443 [2024-07-24 05:03:02.937154] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.443 [2024-07-24 05:03:02.937170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.443 [2024-07-24 05:03:02.937183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.443 05:03:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.443 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:48.443 05:03:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:48.715 [2024-07-24 05:03:03.335159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
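The "45.13s" summary earlier, and the "45.05s" one below, come from the timing wrapper traced at autotest_common.sh@705-718. TIMEFORMAT=%2R makes bash's time keyword emit only the elapsed wall-clock seconds; the sketch below simplifies the stdin/exec handling visible in the trace and is a reconstruction, not the script verbatim.

timing_cmd() {
    local cmd_es=0 time=0 TIMEFORMAT=%2R
    # Run the helper under `time`, capturing only the %2R seconds it prints.
    time=$({ time "$@" >&2; } 2>&1) || cmd_es=$?
    echo "$time"
    return "$cmd_es"
}

debug_remove_attach_helper() {   # sw_hotplug.sh@19-22
    local helper_time=0
    helper_time=$(timing_cmd remove_attach_helper "$@")
    # nvmes[] is the test's global list of hot-plugged devices
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" "${#nvmes[@]}"
}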
00:13:48.985 [2024-07-24 05:03:03.336973] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.985 [2024-07-24 05:03:03.337025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.985 [2024-07-24 05:03:03.337047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.985 [2024-07-24 05:03:03.337075] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.985 [2024-07-24 05:03:03.337091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.985 [2024-07-24 05:03:03.337108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.985 [2024-07-24 05:03:03.337124] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.985 [2024-07-24 05:03:03.337143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.985 [2024-07-24 05:03:03.337157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.985 [2024-07-24 05:03:03.337174] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.985 [2024-07-24 05:03:03.337188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.985 [2024-07-24 05:03:03.337204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.985 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:48.985 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:48.985 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:48.985 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.985 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:48.985 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.985 05:03:03 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.985 05:03:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.985 05:03:03 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.985 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:48.985 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:49.243 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:49.243 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:49.243 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:49.243 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:49.243 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:49.243 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:49.243 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:49.243 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:49.243 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:49.243 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:49.243 05:03:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:01.448 05:03:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:01.448 05:03:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:01.448 05:03:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:01.448 05:03:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:01.448 05:03:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:01.448 05:03:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.448 05:03:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:01.448 05:03:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.05 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.05 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:14:01.448 05:03:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.05 00:14:01.448 05:03:15 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.05 2 00:14:01.448 remove_attach_helper took 45.05s to complete (handling 2 nvme drive(s)) 05:03:15 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:01.448 05:03:15 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 73519 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 73519 ']' 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 73519 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73519 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:01.448 killing process with pid 73519 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73519' 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@967 -- # kill 73519 00:14:01.448 05:03:15 sw_hotplug -- common/autotest_common.sh@972 -- # wait 73519 00:14:03.352 05:03:17 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:03.611 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:04.180 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:04.180 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:04.180 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.180 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.180 00:14:04.180 real 2m31.041s 00:14:04.180 user 1m52.300s 00:14:04.180 sys 0m18.575s 00:14:04.180 05:03:18 sw_hotplug -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:14:04.180 ************************************ 00:14:04.180 05:03:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:04.180 END TEST sw_hotplug 00:14:04.180 ************************************ 00:14:04.180 05:03:18 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:14:04.180 05:03:18 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:04.180 05:03:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:04.180 05:03:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.180 05:03:18 -- common/autotest_common.sh@10 -- # set +x 00:14:04.180 ************************************ 00:14:04.180 START TEST nvme_xnvme 00:14:04.180 ************************************ 00:14:04.180 05:03:18 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:04.441 * Looking for test storage... 00:14:04.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:04.441 05:03:18 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:04.441 05:03:18 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.441 05:03:18 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.441 05:03:18 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.441 05:03:18 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.441 05:03:18 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.441 05:03:18 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.441 05:03:18 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:04.441 05:03:18 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.441 05:03:18 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:14:04.441 05:03:18 nvme_xnvme -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:14:04.441 05:03:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.441 05:03:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.441 ************************************ 00:14:04.441 START TEST xnvme_to_malloc_dd_copy 00:14:04.441 ************************************ 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:04.441 05:03:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:04.441 { 00:14:04.441 "subsystems": [ 00:14:04.441 { 00:14:04.441 "subsystem": "bdev", 00:14:04.441 "config": [ 00:14:04.441 { 00:14:04.441 "params": { 00:14:04.441 "block_size": 512, 00:14:04.441 "num_blocks": 2097152, 00:14:04.441 "name": "malloc0" 00:14:04.441 }, 00:14:04.441 "method": "bdev_malloc_create" 00:14:04.441 }, 00:14:04.441 { 00:14:04.441 "params": 
{ 00:14:04.441 "io_mechanism": "libaio", 00:14:04.441 "filename": "/dev/nullb0", 00:14:04.441 "name": "null0" 00:14:04.441 }, 00:14:04.441 "method": "bdev_xnvme_create" 00:14:04.441 }, 00:14:04.441 { 00:14:04.441 "method": "bdev_wait_for_examine" 00:14:04.441 } 00:14:04.441 ] 00:14:04.441 } 00:14:04.441 ] 00:14:04.441 } 00:14:04.441 [2024-07-24 05:03:18.986755] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:14:04.442 [2024-07-24 05:03:18.986955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74864 ] 00:14:04.710 [2024-07-24 05:03:19.158119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.969 [2024-07-24 05:03:19.385213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.346  Copying: 202/1024 [MB] (202 MBps) Copying: 406/1024 [MB] (204 MBps) Copying: 610/1024 [MB] (203 MBps) Copying: 811/1024 [MB] (201 MBps) Copying: 1014/1024 [MB] (202 MBps) Copying: 1024/1024 [MB] (average 203 MBps) 00:14:14.346 00:14:14.346 05:03:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:14.346 05:03:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:14.346 05:03:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:14.346 05:03:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:14.346 { 00:14:14.346 "subsystems": [ 00:14:14.346 { 00:14:14.346 "subsystem": "bdev", 00:14:14.346 "config": [ 00:14:14.346 { 00:14:14.346 "params": { 00:14:14.346 "block_size": 512, 00:14:14.346 "num_blocks": 2097152, 00:14:14.346 "name": "malloc0" 00:14:14.346 }, 00:14:14.346 "method": "bdev_malloc_create" 00:14:14.346 }, 00:14:14.346 { 00:14:14.346 "params": { 00:14:14.346 "io_mechanism": "libaio", 00:14:14.346 "filename": "/dev/nullb0", 00:14:14.346 "name": "null0" 00:14:14.346 }, 00:14:14.346 "method": "bdev_xnvme_create" 00:14:14.346 }, 00:14:14.346 { 00:14:14.346 "method": "bdev_wait_for_examine" 00:14:14.346 } 00:14:14.346 ] 00:14:14.346 } 00:14:14.346 ] 00:14:14.346 } 00:14:14.346 [2024-07-24 05:03:28.551565] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
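This run, like the three that follow, drives spdk_dd with a bdev config streamed over a file descriptor (the --json /dev/fd/62 at xnvme.sh@42/@47); null0 is an xnvme bdev on /dev/nullb0, which dd/common.sh@186 created earlier with modprobe null_blk gb=1. A minimal sketch of the pattern, with the JSON exactly as dumped above (gen_conf here stands in for the traced helper):

gen_conf() {
    # bdev config exactly as dumped in the trace, compacted onto a few lines
    printf '%s' '{ "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" } ] } ] }'
}

# malloc0 -> null0; the reverse pass at xnvme.sh@47 swaps --ib and --ob.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)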
00:14:14.346 [2024-07-24 05:03:28.551734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74979 ] 00:14:14.346 [2024-07-24 05:03:28.721647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.346 [2024-07-24 05:03:28.880273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.008  Copying: 195/1024 [MB] (195 MBps) Copying: 387/1024 [MB] (192 MBps) Copying: 585/1024 [MB] (197 MBps) Copying: 780/1024 [MB] (195 MBps) Copying: 978/1024 [MB] (198 MBps) Copying: 1024/1024 [MB] (average 195 MBps) 00:14:24.008 00:14:24.008 05:03:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:24.008 05:03:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:24.008 05:03:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:24.008 05:03:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:24.008 05:03:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:24.008 05:03:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:24.008 { 00:14:24.008 "subsystems": [ 00:14:24.008 { 00:14:24.008 "subsystem": "bdev", 00:14:24.008 "config": [ 00:14:24.008 { 00:14:24.008 "params": { 00:14:24.008 "block_size": 512, 00:14:24.008 "num_blocks": 2097152, 00:14:24.008 "name": "malloc0" 00:14:24.008 }, 00:14:24.008 "method": "bdev_malloc_create" 00:14:24.008 }, 00:14:24.008 { 00:14:24.008 "params": { 00:14:24.008 "io_mechanism": "io_uring", 00:14:24.008 "filename": "/dev/nullb0", 00:14:24.008 "name": "null0" 00:14:24.008 }, 00:14:24.008 "method": "bdev_xnvme_create" 00:14:24.008 }, 00:14:24.008 { 00:14:24.008 "method": "bdev_wait_for_examine" 00:14:24.008 } 00:14:24.008 ] 00:14:24.008 } 00:14:24.008 ] 00:14:24.008 } 00:14:24.008 [2024-07-24 05:03:38.339667] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:14:24.008 [2024-07-24 05:03:38.339880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75083 ] 00:14:24.008 [2024-07-24 05:03:38.506919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.267 [2024-07-24 05:03:38.672374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.303  Copying: 200/1024 [MB] (200 MBps) Copying: 415/1024 [MB] (215 MBps) Copying: 625/1024 [MB] (209 MBps) Copying: 835/1024 [MB] (209 MBps) Copying: 1024/1024 [MB] (average 209 MBps) 00:14:33.303 00:14:33.303 05:03:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:33.303 05:03:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:33.303 05:03:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:33.303 05:03:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:33.303 { 00:14:33.303 "subsystems": [ 00:14:33.303 { 00:14:33.303 "subsystem": "bdev", 00:14:33.303 "config": [ 00:14:33.303 { 00:14:33.303 "params": { 00:14:33.303 "block_size": 512, 00:14:33.303 "num_blocks": 2097152, 00:14:33.303 "name": "malloc0" 00:14:33.303 }, 00:14:33.303 "method": "bdev_malloc_create" 00:14:33.303 }, 00:14:33.303 { 00:14:33.303 "params": { 00:14:33.303 "io_mechanism": "io_uring", 00:14:33.303 "filename": "/dev/nullb0", 00:14:33.303 "name": "null0" 00:14:33.303 }, 00:14:33.303 "method": "bdev_xnvme_create" 00:14:33.303 }, 00:14:33.303 { 00:14:33.303 "method": "bdev_wait_for_examine" 00:14:33.304 } 00:14:33.304 ] 00:14:33.304 } 00:14:33.304 ] 00:14:33.304 } 00:14:33.304 [2024-07-24 05:03:47.708679] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:14:33.304 [2024-07-24 05:03:47.708862] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75193 ] 00:14:33.304 [2024-07-24 05:03:47.879220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.562 [2024-07-24 05:03:48.039984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.750  Copying: 205/1024 [MB] (205 MBps) Copying: 407/1024 [MB] (202 MBps) Copying: 610/1024 [MB] (202 MBps) Copying: 815/1024 [MB] (204 MBps) Copying: 1024/1024 [MB] (average 205 MBps) 00:14:42.750 00:14:42.750 05:03:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:42.750 05:03:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:42.750 00:14:42.750 real 0m38.260s 00:14:42.750 user 0m33.341s 00:14:42.750 sys 0m4.382s 00:14:42.750 05:03:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:42.750 05:03:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:42.750 ************************************ 00:14:42.750 END TEST xnvme_to_malloc_dd_copy 00:14:42.750 ************************************ 00:14:42.750 05:03:57 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:42.750 05:03:57 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:42.750 05:03:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.750 05:03:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:42.750 ************************************ 00:14:42.750 START TEST xnvme_bdevperf 00:14:42.750 ************************************ 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:42.750 05:03:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:42.750 { 00:14:42.750 "subsystems": [ 00:14:42.750 { 00:14:42.750 "subsystem": "bdev", 00:14:42.750 "config": [ 00:14:42.750 { 00:14:42.750 "params": { 00:14:42.750 "io_mechanism": "libaio", 00:14:42.750 "filename": "/dev/nullb0", 00:14:42.750 "name": "null0" 00:14:42.750 }, 00:14:42.750 "method": "bdev_xnvme_create" 00:14:42.750 }, 00:14:42.750 { 00:14:42.750 "method": "bdev_wait_for_examine" 00:14:42.750 } 00:14:42.750 ] 00:14:42.750 } 00:14:42.750 ] 00:14:42.750 } 00:14:42.750 [2024-07-24 05:03:57.298709] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:14:42.750 [2024-07-24 05:03:57.298916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75325 ] 00:14:43.008 [2024-07-24 05:03:57.469990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.008 [2024-07-24 05:03:57.635921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.574 Running I/O for 5 seconds... 00:14:48.845 00:14:48.845 Latency(us) 00:14:48.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.845 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:48.845 null0 : 5.00 125361.47 489.69 0.00 0.00 507.46 172.22 1050.07 00:14:48.845 =================================================================================================================== 00:14:48.845 Total : 125361.47 489.69 0.00 0.00 507.46 172.22 1050.07 00:14:49.410 05:04:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:49.410 05:04:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:49.410 05:04:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:49.410 05:04:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:49.410 05:04:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:49.410 05:04:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:49.410 { 00:14:49.410 "subsystems": [ 00:14:49.410 { 00:14:49.410 "subsystem": "bdev", 00:14:49.410 "config": [ 00:14:49.410 { 00:14:49.410 "params": { 00:14:49.410 "io_mechanism": "io_uring", 00:14:49.410 "filename": "/dev/nullb0", 00:14:49.410 "name": "null0" 00:14:49.410 }, 00:14:49.410 "method": "bdev_xnvme_create" 00:14:49.410 }, 00:14:49.410 { 00:14:49.410 "method": "bdev_wait_for_examine" 00:14:49.410 } 00:14:49.410 ] 00:14:49.410 } 00:14:49.410 ] 00:14:49.410 } 00:14:49.410 [2024-07-24 05:04:03.951306] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
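Both bdevperf passes use the same invocation, traced at xnvme.sh@74, differing only in the io_mechanism baked into the generated config (libaio above at ~125k IOPS, io_uring below at ~174k IOPS). A sketch, assuming a gen_conf helper like the one above that emits the dumped JSON:

# 4 KiB random reads, queue depth 64, 5 s, against the xnvme-backed null0 bdev
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_conf) -q 64 -w randread -t 5 -T null0 -o 4096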
00:14:49.410 [2024-07-24 05:04:03.951486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75401 ] 00:14:49.667 [2024-07-24 05:04:04.110045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.667 [2024-07-24 05:04:04.263640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.925 Running I/O for 5 seconds... 00:14:55.191 00:14:55.191 Latency(us) 00:14:55.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.191 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:55.191 null0 : 5.00 174425.39 681.35 0.00 0.00 363.99 203.87 636.74 00:14:55.191 =================================================================================================================== 00:14:55.191 Total : 174425.39 681.35 0.00 0.00 363.99 203.87 636.74 00:14:56.128 05:04:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:56.128 05:04:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:56.128 00:14:56.128 real 0m13.360s 00:14:56.128 user 0m10.331s 00:14:56.128 sys 0m2.808s 00:14:56.128 05:04:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:56.128 05:04:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:56.128 ************************************ 00:14:56.128 END TEST xnvme_bdevperf 00:14:56.128 ************************************ 00:14:56.128 ************************************ 00:14:56.128 END TEST nvme_xnvme 00:14:56.128 ************************************ 00:14:56.128 00:14:56.128 real 0m51.807s 00:14:56.128 user 0m43.736s 00:14:56.128 sys 0m7.308s 00:14:56.128 05:04:10 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:56.128 05:04:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:56.128 05:04:10 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:56.128 05:04:10 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:56.128 05:04:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.128 05:04:10 -- common/autotest_common.sh@10 -- # set +x 00:14:56.128 ************************************ 00:14:56.128 START TEST blockdev_xnvme 00:14:56.128 ************************************ 00:14:56.128 05:04:10 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:56.128 * Looking for test storage... 
00:14:56.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75535 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 75535 00:14:56.128 05:04:10 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 75535 ']' 00:14:56.128 05:04:10 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.128 05:04:10 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:56.128 05:04:10 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.128 05:04:10 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.128 05:04:10 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.128 05:04:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:56.387 [2024-07-24 05:04:10.852523] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:14:56.387 [2024-07-24 05:04:10.852700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75535 ] 00:14:56.647 [2024-07-24 05:04:11.032375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.647 [2024-07-24 05:04:11.193526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.216 05:04:11 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.216 05:04:11 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:14:57.216 05:04:11 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:57.216 05:04:11 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:14:57.216 05:04:11 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:57.216 05:04:11 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:57.216 05:04:11 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:57.783 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:57.783 Waiting for block devices as requested 00:14:57.783 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:58.041 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:58.041 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:58.299 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:03.572 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1668 -- # local nvme bdf 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1660 -- # local device=nvme2n1 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:03.572 05:04:17 blockdev_xnvme -- 
common/autotest_common.sh@1663 -- # [[ none != none ]] 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1660 -- # local device=nvme2n2 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1660 -- # local device=nvme2n3 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1660 -- # local device=nvme3c3n1 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1660 -- # local device=nvme3n1 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:03.572 05:04:17 blockdev_xnvme -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:15:03.572 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:03.573 05:04:17 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:15:03.573 nvme0n1 00:15:03.573 nvme1n1 00:15:03.573 nvme2n1 00:15:03.573 nvme2n2 00:15:03.573 nvme2n3 00:15:03.573 nvme3n1 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd 
bdev_get_bdevs 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "9c30eb74-694b-4288-8ee0-2db744f9c0b2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9c30eb74-694b-4288-8ee0-2db744f9c0b2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "6d11cd99-341a-4762-be09-1e93f2d99508"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6d11cd99-341a-4762-be09-1e93f2d99508",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "009bc1d4-54c1-41e5-b848-4fe57211f1b3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "009bc1d4-54c1-41e5-b848-4fe57211f1b3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "96506e05-296d-4f49-bff0-200b8d45c415"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "96506e05-296d-4f49-bff0-200b8d45c415",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "43829a63-fe06-46e1-906c-8d335fae05ab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "43829a63-fe06-46e1-906c-8d335fae05ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "159ffcf7-39a9-490f-bf80-b3eb71adc3e7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "159ffcf7-39a9-490f-bf80-b3eb71adc3e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:03.573 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 75535 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 75535 ']' 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 75535 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:03.573 05:04:17 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75535 00:15:03.573 05:04:18 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:03.573 05:04:18 blockdev_xnvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:03.573 killing process with pid 75535 00:15:03.573 05:04:18 blockdev_xnvme -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 75535' 00:15:03.573 05:04:18 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 75535 00:15:03.573 05:04:18 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 75535 00:15:05.477 05:04:19 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:05.477 05:04:19 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:05.477 05:04:19 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:15:05.477 05:04:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.477 05:04:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.477 ************************************ 00:15:05.477 START TEST bdev_hello_world 00:15:05.477 ************************************ 00:15:05.477 05:04:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:05.477 [2024-07-24 05:04:19.926824] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:15:05.477 [2024-07-24 05:04:19.927044] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75903 ] 00:15:05.477 [2024-07-24 05:04:20.098368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.736 [2024-07-24 05:04:20.264646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.995 [2024-07-24 05:04:20.605746] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:05.995 [2024-07-24 05:04:20.605816] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:05.995 [2024-07-24 05:04:20.605867] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:05.995 [2024-07-24 05:04:20.607969] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:05.995 [2024-07-24 05:04:20.608384] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:05.995 [2024-07-24 05:04:20.608421] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:05.995 [2024-07-24 05:04:20.608662] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
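For reference, the setup traced above can be reproduced by hand: register each namespace node as an xNVMe bdev over io_uring, dump the bdev subsystem to a JSON config, and point the hello_bdev example at it. A minimal sketch, assuming a running SPDK target on the default /var/tmp/spdk.sock socket and the repository root as the working directory; the jq wrapper and output path are illustrative, and the harness additionally dumps the accel and iobuf subsystems:

    # register every namespace node as an xNVMe bdev using the io_uring backend
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue
        scripts/rpc.py bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring
    done
    # dump the bdev subsystem and wrap it in the top-level "subsystems"
    # array that --json consumers expect
    scripts/rpc.py save_subsystem_config -n bdev | \
        jq '{subsystems: [.]}' > test/bdev/bdev.json
    # replay the config in the hello-world example against the first bdev
    build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1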
00:15:05.995 00:15:05.995 [2024-07-24 05:04:20.608707] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:07.394 00:15:07.394 real 0m1.745s 00:15:07.394 user 0m1.451s 00:15:07.394 sys 0m0.181s 00:15:07.394 05:04:21 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:07.394 05:04:21 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:07.394 ************************************ 00:15:07.394 END TEST bdev_hello_world 00:15:07.394 ************************************ 00:15:07.394 05:04:21 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:07.394 05:04:21 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:07.394 05:04:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.394 05:04:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.394 ************************************ 00:15:07.394 START TEST bdev_bounds 00:15:07.394 ************************************ 00:15:07.394 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:15:07.394 05:04:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=75945 00:15:07.394 05:04:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:07.394 Process bdevio pid: 75945 00:15:07.394 05:04:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 75945' 00:15:07.394 05:04:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:07.394 05:04:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 75945 00:15:07.394 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 75945 ']' 00:15:07.394 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.394 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.394 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.394 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.394 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:07.394 [2024-07-24 05:04:21.709280] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
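The bounds test wraps the bdevio app: it starts bdevio with -w (wait for an RPC trigger instead of running tests immediately) against the same JSON config, waits for the /var/tmp/spdk.sock socket to come up, then fires every registered CUnit suite via tests.py, as the per-bdev suites that follow show. A condensed sketch of that sequence, assuming the harness paths above; the fixed sleep is a crude stand-in for the harness's waitforlisten polling:

    # start bdevio in wait mode; it loads the bdevs and listens for RPCs
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    sleep 1                                  # stand-in for waitforlisten
    test/bdev/bdevio/tests.py perform_tests  # run all CUnit suites over RPC
    kill "$bdevio_pid"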
00:15:07.394 [2024-07-24 05:04:21.709446] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75945 ] 00:15:07.394 [2024-07-24 05:04:21.864982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:07.690 [2024-07-24 05:04:22.033214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.690 [2024-07-24 05:04:22.033400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.690 [2024-07-24 05:04:22.033423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.258 05:04:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.258 05:04:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:15:08.258 05:04:22 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:08.258 I/O targets: 00:15:08.258 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:08.258 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:08.258 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:08.258 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:08.258 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:08.258 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:08.258 00:15:08.258 00:15:08.258 CUnit - A unit testing framework for C - Version 2.1-3 00:15:08.258 http://cunit.sourceforge.net/ 00:15:08.258 00:15:08.258 00:15:08.258 Suite: bdevio tests on: nvme3n1 00:15:08.258 Test: blockdev write read block ...passed 00:15:08.258 Test: blockdev write zeroes read block ...passed 00:15:08.258 Test: blockdev write zeroes read no split ...passed 00:15:08.258 Test: blockdev write zeroes read split ...passed 00:15:08.258 Test: blockdev write zeroes read split partial ...passed 00:15:08.258 Test: blockdev reset ...passed 00:15:08.258 Test: blockdev write read 8 blocks ...passed 00:15:08.258 Test: blockdev write read size > 128k ...passed 00:15:08.258 Test: blockdev write read invalid size ...passed 00:15:08.258 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:08.258 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:08.258 Test: blockdev write read max offset ...passed 00:15:08.258 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:08.258 Test: blockdev writev readv 8 blocks ...passed 00:15:08.258 Test: blockdev writev readv 30 x 1block ...passed 00:15:08.258 Test: blockdev writev readv block ...passed 00:15:08.258 Test: blockdev writev readv size > 128k ...passed 00:15:08.258 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:08.258 Test: blockdev comparev and writev ...passed 00:15:08.258 Test: blockdev nvme passthru rw ...passed 00:15:08.258 Test: blockdev nvme passthru vendor specific ...passed 00:15:08.258 Test: blockdev nvme admin passthru ...passed 00:15:08.258 Test: blockdev copy ...passed 00:15:08.258 Suite: bdevio tests on: nvme2n3 00:15:08.258 Test: blockdev write read block ...passed 00:15:08.258 Test: blockdev write zeroes read block ...passed 00:15:08.258 Test: blockdev write zeroes read no split ...passed 00:15:08.258 Test: blockdev write zeroes read split ...passed 00:15:08.517 Test: blockdev write zeroes read split partial ...passed 00:15:08.517 Test: blockdev reset ...passed 
00:15:08.517 Test: blockdev write read 8 blocks ...passed 00:15:08.517 Test: blockdev write read size > 128k ...passed 00:15:08.517 Test: blockdev write read invalid size ...passed 00:15:08.517 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:08.517 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:08.517 Test: blockdev write read max offset ...passed 00:15:08.517 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:08.517 Test: blockdev writev readv 8 blocks ...passed 00:15:08.517 Test: blockdev writev readv 30 x 1block ...passed 00:15:08.517 Test: blockdev writev readv block ...passed 00:15:08.517 Test: blockdev writev readv size > 128k ...passed 00:15:08.517 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:08.517 Test: blockdev comparev and writev ...passed 00:15:08.517 Test: blockdev nvme passthru rw ...passed 00:15:08.517 Test: blockdev nvme passthru vendor specific ...passed 00:15:08.517 Test: blockdev nvme admin passthru ...passed 00:15:08.517 Test: blockdev copy ...passed 00:15:08.517 Suite: bdevio tests on: nvme2n2 00:15:08.517 Test: blockdev write read block ...passed 00:15:08.517 Test: blockdev write zeroes read block ...passed 00:15:08.517 Test: blockdev write zeroes read no split ...passed 00:15:08.517 Test: blockdev write zeroes read split ...passed 00:15:08.517 Test: blockdev write zeroes read split partial ...passed 00:15:08.517 Test: blockdev reset ...passed 00:15:08.517 Test: blockdev write read 8 blocks ...passed 00:15:08.517 Test: blockdev write read size > 128k ...passed 00:15:08.518 Test: blockdev write read invalid size ...passed 00:15:08.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:08.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:08.518 Test: blockdev write read max offset ...passed 00:15:08.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:08.518 Test: blockdev writev readv 8 blocks ...passed 00:15:08.518 Test: blockdev writev readv 30 x 1block ...passed 00:15:08.518 Test: blockdev writev readv block ...passed 00:15:08.518 Test: blockdev writev readv size > 128k ...passed 00:15:08.518 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:08.518 Test: blockdev comparev and writev ...passed 00:15:08.518 Test: blockdev nvme passthru rw ...passed 00:15:08.518 Test: blockdev nvme passthru vendor specific ...passed 00:15:08.518 Test: blockdev nvme admin passthru ...passed 00:15:08.518 Test: blockdev copy ...passed 00:15:08.518 Suite: bdevio tests on: nvme2n1 00:15:08.518 Test: blockdev write read block ...passed 00:15:08.518 Test: blockdev write zeroes read block ...passed 00:15:08.518 Test: blockdev write zeroes read no split ...passed 00:15:08.518 Test: blockdev write zeroes read split ...passed 00:15:08.518 Test: blockdev write zeroes read split partial ...passed 00:15:08.518 Test: blockdev reset ...passed 00:15:08.518 Test: blockdev write read 8 blocks ...passed 00:15:08.518 Test: blockdev write read size > 128k ...passed 00:15:08.518 Test: blockdev write read invalid size ...passed 00:15:08.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:08.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:08.518 Test: blockdev write read max offset ...passed 00:15:08.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:08.518 Test: blockdev writev readv 8 blocks 
...passed 00:15:08.518 Test: blockdev writev readv 30 x 1block ...passed 00:15:08.518 Test: blockdev writev readv block ...passed 00:15:08.518 Test: blockdev writev readv size > 128k ...passed 00:15:08.518 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:08.518 Test: blockdev comparev and writev ...passed 00:15:08.518 Test: blockdev nvme passthru rw ...passed 00:15:08.518 Test: blockdev nvme passthru vendor specific ...passed 00:15:08.518 Test: blockdev nvme admin passthru ...passed 00:15:08.518 Test: blockdev copy ...passed 00:15:08.518 Suite: bdevio tests on: nvme1n1 00:15:08.518 Test: blockdev write read block ...passed 00:15:08.518 Test: blockdev write zeroes read block ...passed 00:15:08.518 Test: blockdev write zeroes read no split ...passed 00:15:08.518 Test: blockdev write zeroes read split ...passed 00:15:08.518 Test: blockdev write zeroes read split partial ...passed 00:15:08.518 Test: blockdev reset ...passed 00:15:08.518 Test: blockdev write read 8 blocks ...passed 00:15:08.518 Test: blockdev write read size > 128k ...passed 00:15:08.518 Test: blockdev write read invalid size ...passed 00:15:08.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:08.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:08.518 Test: blockdev write read max offset ...passed 00:15:08.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:08.518 Test: blockdev writev readv 8 blocks ...passed 00:15:08.518 Test: blockdev writev readv 30 x 1block ...passed 00:15:08.518 Test: blockdev writev readv block ...passed 00:15:08.518 Test: blockdev writev readv size > 128k ...passed 00:15:08.518 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:08.518 Test: blockdev comparev and writev ...passed 00:15:08.518 Test: blockdev nvme passthru rw ...passed 00:15:08.518 Test: blockdev nvme passthru vendor specific ...passed 00:15:08.518 Test: blockdev nvme admin passthru ...passed 00:15:08.518 Test: blockdev copy ...passed 00:15:08.518 Suite: bdevio tests on: nvme0n1 00:15:08.518 Test: blockdev write read block ...passed 00:15:08.518 Test: blockdev write zeroes read block ...passed 00:15:08.518 Test: blockdev write zeroes read no split ...passed 00:15:08.777 Test: blockdev write zeroes read split ...passed 00:15:08.777 Test: blockdev write zeroes read split partial ...passed 00:15:08.777 Test: blockdev reset ...passed 00:15:08.777 Test: blockdev write read 8 blocks ...passed 00:15:08.777 Test: blockdev write read size > 128k ...passed 00:15:08.777 Test: blockdev write read invalid size ...passed 00:15:08.777 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:08.777 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:08.777 Test: blockdev write read max offset ...passed 00:15:08.777 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:08.777 Test: blockdev writev readv 8 blocks ...passed 00:15:08.777 Test: blockdev writev readv 30 x 1block ...passed 00:15:08.777 Test: blockdev writev readv block ...passed 00:15:08.777 Test: blockdev writev readv size > 128k ...passed 00:15:08.777 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:08.777 Test: blockdev comparev and writev ...passed 00:15:08.777 Test: blockdev nvme passthru rw ...passed 00:15:08.777 Test: blockdev nvme passthru vendor specific ...passed 00:15:08.777 Test: blockdev nvme admin passthru ...passed 00:15:08.777 Test: blockdev copy ...passed 
00:15:08.777 00:15:08.777 Run Summary: Type Total Ran Passed Failed Inactive 00:15:08.777 suites 6 6 n/a 0 0 00:15:08.777 tests 138 138 138 0 0 00:15:08.777 asserts 780 780 780 0 n/a 00:15:08.777 00:15:08.777 Elapsed time = 1.165 seconds 00:15:08.777 0 00:15:08.777 05:04:23 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 75945 00:15:08.777 05:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 75945 ']' 00:15:08.777 05:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 75945 00:15:08.777 05:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:15:08.777 05:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:08.777 05:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75945 00:15:08.777 05:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:08.777 05:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:08.777 05:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75945' 00:15:08.777 killing process with pid 75945 00:15:08.777 05:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 75945 00:15:08.777 05:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 75945 00:15:09.712 05:04:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:09.712 00:15:09.712 real 0m2.588s 00:15:09.712 user 0m6.304s 00:15:09.712 sys 0m0.328s 00:15:09.712 05:04:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.712 ************************************ 00:15:09.712 05:04:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:09.712 END TEST bdev_bounds 00:15:09.712 ************************************ 00:15:09.712 05:04:24 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:09.712 05:04:24 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:09.712 05:04:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.712 05:04:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.712 ************************************ 00:15:09.712 START TEST bdev_nbd 00:15:09.712 ************************************ 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=76005 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 76005 /var/tmp/spdk-nbd.sock 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 76005 ']' 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.713 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:09.972 [2024-07-24 05:04:24.375791] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
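nbd_function_test then round-trips each bdev through the kernel: bdev_svc exposes the RPC socket, nbd_start_disk attaches a bdev to a /dev/nbdX node, and a one-block direct read via dd proves the export answers I/O before it is detached again, as the dd transcripts below show. A minimal sketch of that round trip for a single bdev, assuming the nbd kernel module is loaded and an illustrative scratch path of /tmp/nbdtest (the harness uses test/bdev/nbdtest):

    sock=/var/tmp/spdk-nbd.sock
    # attach the bdev to a kernel nbd node
    scripts/rpc.py -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0
    # wait until the kernel has registered the device
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
    # read exactly one 4 KiB block with O_DIRECT and check it landed
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]]
    # detach and confirm no exports remain
    scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
    scripts/rpc.py -s "$sock" nbd_get_disks   # prints [] once all are stopped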
00:15:09.972 [2024-07-24 05:04:24.376690] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.972 [2024-07-24 05:04:24.551096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.232 [2024-07-24 05:04:24.713695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:10.800 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.059 
1+0 records in 00:15:11.059 1+0 records out 00:15:11.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451809 s, 9.1 MB/s 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:11.059 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.318 1+0 records in 00:15:11.318 1+0 records out 00:15:11.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573923 s, 7.1 MB/s 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:11.318 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:11.577 05:04:26 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.577 1+0 records in 00:15:11.577 1+0 records out 00:15:11.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504632 s, 8.1 MB/s 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:11.577 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:15:11.836 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.837 1+0 records in 00:15:11.837 1+0 records out 00:15:11.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484425 s, 8.5 MB/s 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:11.837 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.103 1+0 records in 00:15:12.103 1+0 records out 00:15:12.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000714251 s, 5.7 MB/s 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:12.103 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:12.361 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:12.361 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:12.361 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:12.361 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:15:12.361 05:04:26 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:12.361 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:12.361 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:12.361 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:15:12.361 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:12.361 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:12.361 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:12.361 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.361 1+0 records in 00:15:12.361 1+0 records out 00:15:12.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705236 s, 5.8 MB/s 00:15:12.361 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.362 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:12.362 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.362 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:12.362 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:12.362 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:12.362 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:12.362 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:12.620 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:12.620 { 00:15:12.620 "nbd_device": "/dev/nbd0", 00:15:12.620 "bdev_name": "nvme0n1" 00:15:12.620 }, 00:15:12.620 { 00:15:12.620 "nbd_device": "/dev/nbd1", 00:15:12.620 "bdev_name": "nvme1n1" 00:15:12.620 }, 00:15:12.620 { 00:15:12.620 "nbd_device": "/dev/nbd2", 00:15:12.620 "bdev_name": "nvme2n1" 00:15:12.620 }, 00:15:12.620 { 00:15:12.620 "nbd_device": "/dev/nbd3", 00:15:12.620 "bdev_name": "nvme2n2" 00:15:12.620 }, 00:15:12.620 { 00:15:12.620 "nbd_device": "/dev/nbd4", 00:15:12.620 "bdev_name": "nvme2n3" 00:15:12.620 }, 00:15:12.620 { 00:15:12.620 "nbd_device": "/dev/nbd5", 00:15:12.620 "bdev_name": "nvme3n1" 00:15:12.620 } 00:15:12.620 ]' 00:15:12.620 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:12.620 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:12.620 { 00:15:12.620 "nbd_device": "/dev/nbd0", 00:15:12.620 "bdev_name": "nvme0n1" 00:15:12.620 }, 00:15:12.620 { 00:15:12.620 "nbd_device": "/dev/nbd1", 00:15:12.620 "bdev_name": "nvme1n1" 00:15:12.620 }, 00:15:12.620 { 00:15:12.620 "nbd_device": "/dev/nbd2", 00:15:12.620 "bdev_name": "nvme2n1" 00:15:12.620 }, 00:15:12.620 { 00:15:12.620 "nbd_device": "/dev/nbd3", 00:15:12.620 "bdev_name": "nvme2n2" 00:15:12.620 }, 00:15:12.620 { 00:15:12.620 "nbd_device": "/dev/nbd4", 00:15:12.620 "bdev_name": "nvme2n3" 00:15:12.620 }, 00:15:12.620 { 00:15:12.620 "nbd_device": "/dev/nbd5", 00:15:12.620 "bdev_name": "nvme3n1" 00:15:12.620 } 00:15:12.620 ]' 00:15:12.620 05:04:27 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:12.620 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:12.620 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.620 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:12.620 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:12.620 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:12.620 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.620 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:12.879 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:12.879 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:12.879 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:12.879 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.879 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.879 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:12.879 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:12.879 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.879 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.879 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:13.138 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:13.138 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:13.138 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:13.138 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.138 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.138 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:13.138 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:13.138 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.138 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.138 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:13.397 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:13.397 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:13.397 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:13.397 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.397 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.397 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:13.397 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:13.397 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.397 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.397 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:13.656 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:13.656 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:13.656 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:13.656 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.656 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.656 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:13.656 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:13.656 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.656 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.656 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:13.915 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:13.915 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:13.915 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:13.915 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.915 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.915 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:13.915 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:13.915 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.915 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.915 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:14.174 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:14.174 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:14.174 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:14.174 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.174 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.174 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:14.174 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:14.174 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.174 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:14.174 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:14.174 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:14.433 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:14.692 /dev/nbd0 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.692 1+0 records in 00:15:14.692 1+0 records out 00:15:14.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490798 s, 8.3 MB/s 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:14.692 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:14.951 /dev/nbd1 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.951 1+0 records in 00:15:14.951 1+0 records out 00:15:14.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574822 s, 7.1 MB/s 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:14.951 05:04:29 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:14.951 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:15:15.210 /dev/nbd10 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.210 1+0 records in 00:15:15.210 1+0 records out 00:15:15.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544451 s, 7.5 MB/s 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:15.210 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:15.211 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:15.211 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:15:15.470 /dev/nbd11 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:15.470 05:04:30 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.470 1+0 records in 00:15:15.470 1+0 records out 00:15:15.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564833 s, 7.3 MB/s 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:15.470 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:15:15.749 /dev/nbd12 00:15:15.749 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:15.749 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:15.749 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:15:15.749 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:15.749 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:15.749 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:15.749 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:15:15.749 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:15.749 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:15.749 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:15.749 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.749 1+0 records in 00:15:15.749 1+0 records out 00:15:15.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644464 s, 6.4 MB/s 00:15:15.749 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.749 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:15.750 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.750 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:15.750 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:15.750 05:04:30 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:15.750 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:15.750 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:16.028 /dev/nbd13 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.287 1+0 records in 00:15:16.287 1+0 records out 00:15:16.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000937974 s, 4.4 MB/s 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:16.287 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:16.547 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:16.547 { 00:15:16.547 "nbd_device": "/dev/nbd0", 00:15:16.547 "bdev_name": "nvme0n1" 00:15:16.547 }, 00:15:16.547 { 00:15:16.547 "nbd_device": "/dev/nbd1", 00:15:16.547 "bdev_name": "nvme1n1" 00:15:16.547 }, 00:15:16.547 { 00:15:16.547 "nbd_device": "/dev/nbd10", 00:15:16.547 "bdev_name": "nvme2n1" 00:15:16.547 }, 00:15:16.547 { 00:15:16.547 "nbd_device": "/dev/nbd11", 00:15:16.547 "bdev_name": "nvme2n2" 00:15:16.547 }, 00:15:16.547 { 00:15:16.547 "nbd_device": "/dev/nbd12", 00:15:16.547 "bdev_name": "nvme2n3" 00:15:16.547 }, 00:15:16.547 { 00:15:16.547 "nbd_device": "/dev/nbd13", 00:15:16.547 "bdev_name": "nvme3n1" 00:15:16.547 } 00:15:16.547 ]' 00:15:16.547 05:04:30 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:16.547 { 00:15:16.547 "nbd_device": "/dev/nbd0", 00:15:16.547 "bdev_name": "nvme0n1" 00:15:16.547 }, 00:15:16.547 { 00:15:16.547 "nbd_device": "/dev/nbd1", 00:15:16.547 "bdev_name": "nvme1n1" 00:15:16.547 }, 00:15:16.547 { 00:15:16.547 "nbd_device": "/dev/nbd10", 00:15:16.547 "bdev_name": "nvme2n1" 00:15:16.547 }, 00:15:16.547 { 00:15:16.547 "nbd_device": "/dev/nbd11", 00:15:16.547 "bdev_name": "nvme2n2" 00:15:16.547 }, 00:15:16.547 { 00:15:16.547 "nbd_device": "/dev/nbd12", 00:15:16.547 "bdev_name": "nvme2n3" 00:15:16.547 }, 00:15:16.547 { 00:15:16.547 "nbd_device": "/dev/nbd13", 00:15:16.547 "bdev_name": "nvme3n1" 00:15:16.547 } 00:15:16.547 ]' 00:15:16.547 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:16.547 /dev/nbd1 00:15:16.547 /dev/nbd10 00:15:16.547 /dev/nbd11 00:15:16.547 /dev/nbd12 00:15:16.547 /dev/nbd13' 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:16.547 /dev/nbd1 00:15:16.547 /dev/nbd10 00:15:16.547 /dev/nbd11 00:15:16.547 /dev/nbd12 00:15:16.547 /dev/nbd13' 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:16.547 256+0 records in 00:15:16.547 256+0 records out 00:15:16.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00730758 s, 143 MB/s 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:16.547 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:16.806 256+0 records in 00:15:16.806 256+0 records out 00:15:16.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170553 s, 6.1 MB/s 00:15:16.806 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:16.806 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:16.806 256+0 records in 00:15:16.806 256+0 records out 00:15:16.806 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.181213 s, 5.8 MB/s 00:15:16.806 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:16.806 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:17.065 256+0 records in 00:15:17.065 256+0 records out 00:15:17.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175664 s, 6.0 MB/s 00:15:17.065 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:17.066 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:17.325 256+0 records in 00:15:17.325 256+0 records out 00:15:17.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171668 s, 6.1 MB/s 00:15:17.325 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:17.325 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:17.325 256+0 records in 00:15:17.325 256+0 records out 00:15:17.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165714 s, 6.3 MB/s 00:15:17.325 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:17.325 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:17.584 256+0 records in 00:15:17.584 256+0 records out 00:15:17.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173916 s, 6.0 MB/s 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.584 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:17.843 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:17.843 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:17.843 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:17.843 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.843 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.843 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:17.843 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:17.843 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.843 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.843 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.410 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:18.669 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:18.669 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:18.669 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:18.669 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.669 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.669 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:18.669 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:18.669 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.669 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.669 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:18.931 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:18.931 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:18.931 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:18.931 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.931 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.931 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:18.931 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:18.931 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.931 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.931 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:19.190 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:19.190 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:19.190 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:19.190 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.190 05:04:33 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.190 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:19.190 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:19.190 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.190 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:19.190 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:19.190 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:19.449 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:19.449 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:19.449 05:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:15:19.449 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:19.707 malloc_lvol_verify 00:15:19.707 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:19.966 f06a5dca-35b1-4057-9f94-baeef5327591 00:15:19.966 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:20.224 ea326d13-83cc-4b55-b3fc-729b0cbf299c 00:15:20.224 05:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:20.484 /dev/nbd0 00:15:20.484 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:15:20.484 mke2fs 1.46.5 (30-Dec-2021) 00:15:20.484 Discarding device blocks: 0/4096 done 
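
The lvol round trip traced above can be reproduced by hand. Every RPC below is copied verbatim from the trace: a 16 MiB malloc bdev with 512-byte blocks hosts an lvstore, a 4 MiB lvol carved from it is exported as /dev/nbd0, and mkfs.ext4 proves the block device end to end. The $RPC variable is shorthand introduced here for readability, not part of the test script.

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
  $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of the malloc bdev
  $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume in that store
  $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
  mkfs.ext4 /dev/nbd0                                    # filesystem creation as the end-to-end check
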
00:15:20.484 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:20.484 00:15:20.484 Allocating group tables: 0/1 done 00:15:20.484 Writing inode tables: 0/1 done 00:15:20.484 Creating journal (1024 blocks): done 00:15:20.484 Writing superblocks and filesystem accounting information: 0/1 done 00:15:20.484 00:15:20.484 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:15:20.484 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:20.484 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:20.484 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:20.484 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.484 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:20.484 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.484 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 76005 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 76005 ']' 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 76005 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76005 00:15:20.745 killing process with pid 76005 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76005' 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 76005 00:15:20.745 05:04:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 76005 00:15:22.121 ************************************ 00:15:22.121 END TEST bdev_nbd 00:15:22.121 ************************************ 00:15:22.121 05:04:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:22.121 00:15:22.121 real 
0m12.165s 00:15:22.121 user 0m16.873s 00:15:22.121 sys 0m4.113s 00:15:22.121 05:04:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:22.121 05:04:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:22.121 05:04:36 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:22.121 05:04:36 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:15:22.121 05:04:36 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:15:22.121 05:04:36 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:22.121 05:04:36 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:22.121 05:04:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.121 05:04:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:22.121 ************************************ 00:15:22.121 START TEST bdev_fio 00:15:22.121 ************************************ 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:22.121 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local workload=verify 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local bdev_type=AIO 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local env_context= 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local fio_dir=/usr/src/fio 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1289 -- # '[' -z verify ']' 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -n '' ']' 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # cat 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1311 -- # '[' verify == verify ']' 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1312 -- # cat 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1321 -- # '[' AIO == AIO ']' 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1322 -- # /usr/src/fio/fio --version 00:15:22.121 05:04:36 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1322 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # echo serialize_overlap=1 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:22.121 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:22.122 ************************************ 00:15:22.122 START TEST bdev_fio_rw_verify 00:15:22.122 ************************************ 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local sanitizers 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # shift 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local asan_lib= 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # grep libasan 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # break 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:22.122 05:04:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:22.380 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:22.381 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:22.381 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:22.381 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:22.381 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:22.381 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:22.381 fio-3.35 00:15:22.381 Starting 6 threads 00:15:34.587 00:15:34.587 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=76418: Wed Jul 24 05:04:47 2024 00:15:34.587 read: IOPS=27.9k, 
BW=109MiB/s (114MB/s)(1088MiB/10001msec)
00:15:34.587 slat (usec): min=2, max=919, avg= 7.30, stdev= 5.16
00:15:34.587 clat (usec): min=102, max=5305, avg=673.22, stdev=229.59
00:15:34.587 lat (usec): min=108, max=5317, avg=680.52, stdev=230.41
00:15:34.587 clat percentiles (usec):
00:15:34.587 | 50.000th=[ 701], 99.000th=[ 1188], 99.900th=[ 1745], 99.990th=[ 3621],
00:15:34.587 | 99.999th=[ 5276]
00:15:34.587 write: IOPS=28.1k, BW=110MiB/s (115MB/s)(1096MiB/10001msec); 0 zone resets
00:15:34.587 slat (usec): min=10, max=2681, avg=26.96, stdev=26.92
00:15:34.587 clat (usec): min=93, max=5291, avg=762.76, stdev=242.09
00:15:34.587 lat (usec): min=114, max=5314, avg=789.72, stdev=243.96
00:15:34.587 clat percentiles (usec):
00:15:34.587 | 50.000th=[ 775], 99.000th=[ 1385], 99.900th=[ 2114], 99.990th=[ 3490],
00:15:34.587 | 99.999th=[ 5276]
00:15:34.587 bw ( KiB/s): min=98687, max=138608, per=100.00%, avg=112848.00, stdev=2016.62, samples=114
00:15:34.587 iops : min=24671, max=34652, avg=28211.89, stdev=504.17, samples=114
00:15:34.587 lat (usec) : 100=0.01%, 250=2.35%, 500=15.77%, 750=35.30%, 1000=37.29%
00:15:34.587 lat (msec) : 2=9.20%, 4=0.09%, 10=0.01%
00:15:34.587 cpu : usr=60.81%, sys=26.18%, ctx=7089, majf=0, minf=23826
00:15:34.587 IO depths : 1=11.9%, 2=24.4%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:15:34.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:34.587 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:34.587 issued rwts: total=278549,280595,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:34.587 latency : target=0, window=0, percentile=100.00%, depth=8
00:15:34.587
00:15:34.587 Run status group 0 (all jobs):
00:15:34.587 READ: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=1088MiB (1141MB), run=10001-10001msec
00:15:34.587 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1096MiB (1149MB), run=10001-10001msec
00:15:34.587 -----------------------------------------------------
00:15:34.587 Suppressions used:
00:15:34.587 count bytes template
00:15:34.587 6 48 /usr/src/fio/parse.c
00:15:34.587 1849 177504 /usr/src/fio/iolog.c
00:15:34.587 1 8 libtcmalloc_minimal.so
00:15:34.587 1 904 libcrypto.so
00:15:34.587 -----------------------------------------------------
00:15:34.587
00:15:34.587
00:15:34.587 real 0m12.063s
00:15:34.587 user 0m38.125s
00:15:34.587 sys 0m16.018s
00:15:34.587 05:04:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:15:34.587 05:04:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:15:34.587 ************************************
00:15:34.587 END TEST bdev_fio_rw_verify
00:15:34.587 ************************************
00:15:34.587 05:04:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local workload=trim
00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local bdev_type=
00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio
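
For reference, the rw_verify pass summarized above is plain fio driven through SPDK's external ioengine rather than the kernel block layer. The invocation below is condensed from the trace itself; every path and flag is this job's, so treat it as a sketch to adapt rather than a canonical wrapper. Preloading libasan alongside the fio plugin mirrors what the wrapper does on ASan builds, so the sanitizer runtime is in place before the plugin loads.

  # fio against SPDK bdevs via the spdk_bdev fio plugin (values from the trace above).
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
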
-- common/autotest_common.sh@1281 -- # local env_context= 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local fio_dir=/usr/src/fio 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1289 -- # '[' -z trim ']' 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -n '' ']' 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # cat 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1311 -- # '[' trim == verify ']' 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # '[' trim == trim ']' 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo rw=trimwrite 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "9c30eb74-694b-4288-8ee0-2db744f9c0b2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9c30eb74-694b-4288-8ee0-2db744f9c0b2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "6d11cd99-341a-4762-be09-1e93f2d99508"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6d11cd99-341a-4762-be09-1e93f2d99508",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "009bc1d4-54c1-41e5-b848-4fe57211f1b3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "009bc1d4-54c1-41e5-b848-4fe57211f1b3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "96506e05-296d-4f49-bff0-200b8d45c415"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "96506e05-296d-4f49-bff0-200b8d45c415",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "43829a63-fe06-46e1-906c-8d335fae05ab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "43829a63-fe06-46e1-906c-8d335fae05ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "159ffcf7-39a9-490f-bf80-b3eb71adc3e7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "159ffcf7-39a9-490f-bf80-b3eb71adc3e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:34.588 /home/vagrant/spdk_repo/spdk 00:15:34.588 05:04:48 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:34.588 00:15:34.588 real 0m12.242s 00:15:34.588 user 0m38.227s 00:15:34.588 sys 0m16.092s 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:34.588 05:04:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:34.588 ************************************ 00:15:34.588 END TEST bdev_fio 00:15:34.588 ************************************ 00:15:34.588 05:04:48 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:34.588 05:04:48 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:34.588 05:04:48 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:15:34.588 05:04:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.588 05:04:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.588 ************************************ 00:15:34.588 START TEST bdev_verify 00:15:34.588 ************************************ 00:15:34.588 05:04:48 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:34.588 [2024-07-24 05:04:48.876409] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:15:34.588 [2024-07-24 05:04:48.876604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76593 ] 00:15:34.588 [2024-07-24 05:04:49.049314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:34.588 [2024-07-24 05:04:49.210775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.588 [2024-07-24 05:04:49.210794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.158 Running I/O for 5 seconds... 
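The verify stage just launched above is driven by one bdevperf invocation; a sketch of the equivalent standalone command follows, with the flag meanings spelled out. Paths and values are taken from the run_test line in this log; the -C flag is simply passed through as the harness supplies it.

```bash
# Sketch of the bdev_verify invocation traced above; paths and flag values
# come from this log's run_test line, not invented.
SPDK=/home/vagrant/spdk_repo/spdk

args=(
    --json "$SPDK/test/bdev/bdev.json"  # xNVMe bdev definitions generated earlier
    -q 128                              # queue depth per job
    -o 4096                             # I/O size in bytes
    -w verify                           # write a pattern, read it back, compare
    -t 5                                # run for 5 seconds
    -C                                  # extra flag passed through by the harness
    -m 0x3                              # core mask: reactors on cores 0 and 1
)
"$SPDK/build/examples/bdevperf" "${args[@]}"
```

With -m 0x3 each bdev gets two verify jobs, one per reactor core, which is why every device appears twice (Core Mask 0x1 and 0x2) in the latency table that follows.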
00:15:40.431 
00:15:40.431 Latency(us)
00:15:40.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:40.431 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:40.431 Verification LBA range: start 0x0 length 0xa0000
00:15:40.431 nvme0n1 : 5.04 1702.93 6.65 0.00 0.00 75026.96 9413.35 77213.32
00:15:40.431 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:40.431 Verification LBA range: start 0xa0000 length 0xa0000
00:15:40.431 nvme0n1 : 5.06 1645.61 6.43 0.00 0.00 77641.39 9889.98 71493.82
00:15:40.431 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:40.431 Verification LBA range: start 0x0 length 0xbd0bd
00:15:40.431 nvme1n1 : 5.06 2993.81 11.69 0.00 0.00 42496.32 4200.26 63391.19
00:15:40.431 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:40.431 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:15:40.431 nvme1n1 : 5.05 2790.84 10.90 0.00 0.00 45641.69 5510.98 67680.81
00:15:40.431 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:40.431 Verification LBA range: start 0x0 length 0x80000
00:15:40.431 nvme2n1 : 5.05 1570.93 6.14 0.00 0.00 80869.58 5123.72 101521.22
00:15:40.431 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:40.431 Verification LBA range: start 0x80000 length 0x80000
00:15:40.431 nvme2n1 : 5.05 1547.66 6.05 0.00 0.00 82297.32 19660.80 108670.60
00:15:40.431 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:40.431 Verification LBA range: start 0x0 length 0x80000
00:15:40.431 nvme2n2 : 5.07 1716.47 6.70 0.00 0.00 73845.63 3589.59 71970.44
00:15:40.431 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:40.431 Verification LBA range: start 0x80000 length 0x80000
00:15:40.431 nvme2n2 : 5.05 1646.56 6.43 0.00 0.00 77285.27 10545.34 77689.95
00:15:40.431 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:40.431 Verification LBA range: start 0x0 length 0x80000
00:15:40.431 nvme2n3 : 5.06 1694.47 6.62 0.00 0.00 74714.83 10545.34 71970.44
00:15:40.431 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:40.431 Verification LBA range: start 0x80000 length 0x80000
00:15:40.431 nvme2n3 : 5.06 1643.97 6.42 0.00 0.00 77263.72 13405.09 71970.44
00:15:40.431 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:40.431 Verification LBA range: start 0x0 length 0x20000
00:15:40.431 nvme3n1 : 5.07 1693.17 6.61 0.00 0.00 74714.83 7745.16 81026.33
00:15:40.431 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:40.431 Verification LBA range: start 0x20000 length 0x20000
00:15:40.431 nvme3n1 : 5.06 1642.72 6.42 0.00 0.00 77187.04 5659.93 75783.45
00:15:40.431 ===================================================================================================================
00:15:40.431 Total : 22289.14 87.07 0.00 0.00 68442.61 3589.59 108670.60
00:15:41.367 
00:15:41.367 real 0m7.029s
00:15:41.367 user 0m10.971s
00:15:41.367 sys 0m1.755s
00:15:41.367 05:04:55 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:15:41.367 05:04:55 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:15:41.367 ************************************
00:15:41.367 END TEST bdev_verify
00:15:41.367 ************************************
00:15:41.367 05:04:55 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:15:41.367 05:04:55 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:15:41.367 05:04:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:15:41.367 05:04:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:41.367 ************************************
00:15:41.367 START TEST bdev_verify_big_io
00:15:41.367 ************************************
00:15:41.367 05:04:55 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:15:41.367 [2024-07-24 05:04:55.969317] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
[2024-07-24 05:04:55.969531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76691 ]
00:15:41.625 [2024-07-24 05:04:56.159924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:41.884 [2024-07-24 05:04:56.316459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:41.884 [2024-07-24 05:04:56.316476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:15:42.451 Running I/O for 5 seconds...
00:15:49.038 
00:15:49.038 Latency(us)
00:15:49.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:49.038 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:49.038 Verification LBA range: start 0x0 length 0xa000
00:15:49.038 nvme0n1 : 5.95 161.23 10.08 0.00 0.00 766492.67 122016.12 934185.89
00:15:49.038 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:49.038 Verification LBA range: start 0xa000 length 0xa000
00:15:49.038 nvme0n1 : 5.96 115.51 7.22 0.00 0.00 1075060.15 18111.77 1319299.26
00:15:49.038 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:49.038 Verification LBA range: start 0x0 length 0xbd0b
00:15:49.038 nvme1n1 : 5.96 147.72 9.23 0.00 0.00 814148.58 43849.54 1433689.37
00:15:49.038 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:49.038 Verification LBA range: start 0xbd0b length 0xbd0b
00:15:49.038 nvme1n1 : 5.86 160.97 10.06 0.00 0.00 751716.48 8162.21 976128.93
00:15:49.038 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:49.038 Verification LBA range: start 0x0 length 0x8000
00:15:49.038 nvme2n1 : 5.96 134.17 8.39 0.00 0.00 871396.59 26691.03 960876.92
00:15:49.038 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:49.038 Verification LBA range: start 0x8000 length 0x8000
00:15:49.038 nvme2n1 : 5.95 139.89 8.74 0.00 0.00 835045.50 153473.40 1540453.47
00:15:49.038 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:49.038 Verification LBA range: start 0x0 length 0x8000
00:15:49.038 nvme2n2 : 5.98 101.72 6.36 0.00 0.00 1137997.78 157.32 2608094.49
00:15:49.038 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:49.038 Verification LBA range: start 0x8000 length 0x8000
00:15:49.038 nvme2n2 : 5.87 132.24 8.26 0.00 0.00 861835.87 70063.94 1128649.08
00:15:49.038 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:49.038 Verification LBA range: start 0x0 length 0x8000
00:15:49.038 nvme2n3 : 5.97 104.53 6.53 0.00 0.00 1076988.92 8996.31 2409818.30
00:15:49.038 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:49.038 Verification LBA range: start 0x8000 length 0x8000
00:15:49.038 nvme2n3 : 5.87 109.00 6.81 0.00 0.00 1012037.91 28597.53 2181038.08
00:15:49.038 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:49.038 Verification LBA range: start 0x0 length 0x2000
00:15:49.038 nvme3n1 : 5.97 147.50 9.22 0.00 0.00 741009.35 3768.32 1403185.34
00:15:49.038 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:49.038 Verification LBA range: start 0x2000 length 0x2000
00:15:49.038 nvme3n1 : 5.96 122.11 7.63 0.00 0.00 883994.20 6315.29 2242046.14
00:15:49.038 ===================================================================================================================
00:15:49.038 Total : 1576.58 98.54 0.00 0.00 883492.44 157.32 2608094.49
00:15:49.605 
00:15:49.605 real 0m8.156s
00:15:49.605 user 0m14.619s
00:15:49.605 sys 0m0.540s
00:15:49.605 05:05:04 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:15:49.605 05:05:04 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:15:49.605 ************************************
00:15:49.605 END TEST bdev_verify_big_io
00:15:49.605 ************************************
00:15:49.605 05:05:04 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:49.605 05:05:04 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:15:49.605 05:05:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:15:49.605 05:05:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:49.605 ************************************
00:15:49.605 START TEST bdev_write_zeroes
00:15:49.605 ************************************
00:15:49.605 05:05:04 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:49.863 [2024-07-24 05:05:04.176447] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
[2024-07-24 05:05:04.176655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76803 ]
00:15:49.863 [2024-07-24 05:05:04.351947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:50.122 [2024-07-24 05:05:04.509102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:50.381 Running I/O for 1 seconds...
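The write_zeroes pass that just started reuses the same binary with only the workload, runtime, and core mask changed; judging by the run_test line above, the equivalent command is simply:

```bash
# Same bdevperf binary, different workload: one second of WRITE ZEROES
# requests per bdev (flags copied from the run_test line above).
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w write_zeroes -t 1
```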
00:15:51.317 
00:15:51.317 Latency(us)
00:15:51.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:51.317 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:51.317 nvme0n1 : 1.02 10522.16 41.10 0.00 0.00 12151.78 6940.86 19184.17
00:15:51.317 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:51.317 nvme1n1 : 1.02 16352.86 63.88 0.00 0.00 7809.98 4319.42 15966.95
00:15:51.317 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:51.317 nvme2n1 : 1.02 10506.06 41.04 0.00 0.00 12090.15 7030.23 20852.36
00:15:51.317 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:51.317 nvme2n2 : 1.02 10490.50 40.98 0.00 0.00 12099.65 7030.23 20971.52
00:15:51.317 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:51.317 nvme2n3 : 1.03 10475.03 40.92 0.00 0.00 12108.91 6911.07 20971.52
00:15:51.317 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:51.317 nvme3n1 : 1.03 10459.40 40.86 0.00 0.00 12116.28 6911.07 20971.52
00:15:51.317 ===================================================================================================================
00:15:51.317 Total : 68806.02 268.77 0.00 0.00 11094.70 4319.42 20971.52
00:15:52.695 
00:15:52.695 real 0m2.906s
00:15:52.695 user 0m2.173s
00:15:52.695 sys 0m0.561s
00:15:52.695 ************************************
00:15:52.695 END TEST bdev_write_zeroes
00:15:52.695 ************************************
00:15:52.695 05:05:06 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:15:52.695 05:05:06 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:15:52.695 05:05:07 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:52.695 05:05:07 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:15:52.695 05:05:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:15:52.695 05:05:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:52.695 ************************************
00:15:52.695 START TEST bdev_json_nonenclosed
00:15:52.695 ************************************
00:15:52.695 05:05:07 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:52.695 [2024-07-24 05:05:07.128437] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
[2024-07-24 05:05:07.128635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76849 ]
00:15:52.954 [2024-07-24 05:05:07.291426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:52.954 [2024-07-24 05:05:07.443737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:52.954 [2024-07-24 05:05:07.443889] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:15:52.954 [2024-07-24 05:05:07.443919] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:52.954 [2024-07-24 05:05:07.443934] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:53.213 00:15:53.213 real 0m0.781s 00:15:53.213 user 0m0.538s 00:15:53.213 sys 0m0.137s 00:15:53.213 05:05:07 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.213 05:05:07 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:53.213 ************************************ 00:15:53.213 END TEST bdev_json_nonenclosed 00:15:53.213 ************************************ 00:15:53.475 05:05:07 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:53.475 05:05:07 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:53.475 05:05:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.475 05:05:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.475 ************************************ 00:15:53.475 START TEST bdev_json_nonarray 00:15:53.475 ************************************ 00:15:53.476 05:05:07 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:53.476 [2024-07-24 05:05:07.957871] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:15:53.476 [2024-07-24 05:05:07.958065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76876 ] 00:15:53.751 [2024-07-24 05:05:08.132696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.751 [2024-07-24 05:05:08.302949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.751 [2024-07-24 05:05:08.303101] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
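Both json_config errors above are the expected outcome: nonenclosed.json and nonarray.json deliberately violate the top-level shape that json_config_prepare_ctx() enforces, and each sub-test passes precisely because bdevperf refuses to start. For contrast, a minimal well-formed counterpart (hypothetical file name good.json) has to be an object enclosed in {} whose "subsystems" member is an array:

```bash
# Hypothetical minimal counterpart to nonenclosed.json / nonarray.json:
# a top-level {} object whose "subsystems" member is an array, the same
# shape every save_config dump later in this log follows.
cat > good.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF
```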
00:15:53.751 [2024-07-24 05:05:08.303131] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:53.751 [2024-07-24 05:05:08.303147] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:54.318 00:15:54.318 real 0m0.812s 00:15:54.318 user 0m0.573s 00:15:54.318 sys 0m0.132s 00:15:54.318 05:05:08 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:54.318 05:05:08 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:54.318 ************************************ 00:15:54.318 END TEST bdev_json_nonarray 00:15:54.318 ************************************ 00:15:54.318 05:05:08 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:15:54.318 05:05:08 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:15:54.318 05:05:08 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:15:54.318 05:05:08 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:15:54.318 05:05:08 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:15:54.318 05:05:08 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:54.318 05:05:08 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:54.318 05:05:08 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:54.318 05:05:08 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:54.318 05:05:08 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:54.318 05:05:08 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:54.318 05:05:08 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:54.885 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:56.261 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:56.261 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:56.261 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:56.261 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:56.261 00:15:56.261 real 1m0.103s 00:15:56.261 user 1m41.976s 00:15:56.261 sys 0m26.906s 00:15:56.261 05:05:10 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:56.261 05:05:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:56.261 ************************************ 00:15:56.261 END TEST blockdev_xnvme 00:15:56.261 ************************************ 00:15:56.261 05:05:10 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:56.261 05:05:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:56.261 05:05:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.261 05:05:10 -- common/autotest_common.sh@10 -- # set +x 00:15:56.261 ************************************ 00:15:56.261 START TEST ublk 00:15:56.261 ************************************ 00:15:56.261 05:05:10 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:56.261 * Looking for test storage... 
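Stripped of its harness variables, what the ublk suite starting here exercises first is the basic target bring-up: the kernel driver, one spdk_tgt, a malloc bdev, and a ublk disk on top of it. A rough manual equivalent under the same repo layout looks like the sketch below; the sleep is a crude stand-in for the waitforlisten polling the harness actually does, and the malloc/ublk parameters mirror the calls traced later in this log.

```bash
# Rough manual equivalent of the ublk bring-up this suite automates
# (paths from this log; parameters mirror the traced rpc_cmd calls).
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"

modprobe ublk_drv                      # kernel half of ublk
"$SPDK/build/bin/spdk_tgt" -L ublk &   # userspace target with ublk tracing
sleep 1                                # stand-in for waitforlisten
"$RPC" ublk_create_target              # spawn the ublk target thread
"$RPC" bdev_malloc_create 128 4096     # 128 MiB RAM bdev, 4096-byte blocks -> Malloc0
"$RPC" ublk_start_disk Malloc0 0 -q 4 -d 512   # expose it as /dev/ublkb0
```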
00:15:56.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:56.261 05:05:10 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:56.261 05:05:10 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:56.261 05:05:10 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:56.261 05:05:10 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:56.261 05:05:10 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:56.261 05:05:10 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:56.261 05:05:10 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:56.261 05:05:10 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:56.261 05:05:10 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:56.261 05:05:10 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:56.261 05:05:10 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:56.261 05:05:10 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:56.261 05:05:10 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:56.261 05:05:10 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:56.261 05:05:10 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:56.261 05:05:10 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:56.261 05:05:10 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:56.261 05:05:10 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:56.261 05:05:10 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:56.261 05:05:10 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:56.520 05:05:10 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:56.520 05:05:10 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.520 05:05:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:56.520 ************************************ 00:15:56.520 START TEST test_save_ublk_config 00:15:56.520 ************************************ 00:15:56.520 05:05:10 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config 00:15:56.520 05:05:10 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:56.520 05:05:10 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=77165 00:15:56.520 05:05:10 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:56.520 05:05:10 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:56.520 05:05:10 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 77165 00:15:56.520 05:05:10 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 77165 ']' 00:15:56.520 05:05:10 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.520 05:05:10 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.520 05:05:10 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.520 05:05:10 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.520 05:05:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:56.520 [2024-07-24 05:05:11.024885] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:15:56.520 [2024-07-24 05:05:11.025068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77165 ] 00:15:56.779 [2024-07-24 05:05:11.195084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.038 [2024-07-24 05:05:11.436300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.605 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.605 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:15:57.605 05:05:12 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:57.605 05:05:12 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:57.605 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.605 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:57.605 [2024-07-24 05:05:12.134063] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:57.605 [2024-07-24 05:05:12.135292] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:57.605 malloc0 00:15:57.605 [2024-07-24 05:05:12.204247] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:57.605 [2024-07-24 05:05:12.204389] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:57.605 [2024-07-24 05:05:12.204404] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:57.605 [2024-07-24 05:05:12.204414] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:57.605 [2024-07-24 05:05:12.212028] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:57.605 [2024-07-24 05:05:12.212080] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:57.605 [2024-07-24 05:05:12.219995] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:57.605 [2024-07-24 05:05:12.220127] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:57.864 [2024-07-24 05:05:12.243956] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:57.864 0 00:15:57.864 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.864 05:05:12 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:57.864 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.864 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:57.864 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.864 05:05:12 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:57.864 "subsystems": [ 00:15:57.864 { 00:15:57.864 "subsystem": "keyring", 00:15:57.864 "config": [] 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "subsystem": "iobuf", 00:15:57.864 "config": [ 00:15:57.864 { 00:15:57.864 "method": "iobuf_set_options", 00:15:57.864 "params": { 00:15:57.864 "small_pool_count": 8192, 00:15:57.864 "large_pool_count": 1024, 00:15:57.864 "small_bufsize": 8192, 00:15:57.864 "large_bufsize": 135168 00:15:57.864 } 00:15:57.864 } 00:15:57.864 ] 00:15:57.864 }, 00:15:57.864 { 
00:15:57.864 "subsystem": "sock", 00:15:57.864 "config": [ 00:15:57.864 { 00:15:57.864 "method": "sock_set_default_impl", 00:15:57.864 "params": { 00:15:57.864 "impl_name": "posix" 00:15:57.864 } 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "method": "sock_impl_set_options", 00:15:57.864 "params": { 00:15:57.864 "impl_name": "ssl", 00:15:57.864 "recv_buf_size": 4096, 00:15:57.864 "send_buf_size": 4096, 00:15:57.864 "enable_recv_pipe": true, 00:15:57.864 "enable_quickack": false, 00:15:57.864 "enable_placement_id": 0, 00:15:57.864 "enable_zerocopy_send_server": true, 00:15:57.864 "enable_zerocopy_send_client": false, 00:15:57.864 "zerocopy_threshold": 0, 00:15:57.864 "tls_version": 0, 00:15:57.864 "enable_ktls": false 00:15:57.864 } 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "method": "sock_impl_set_options", 00:15:57.864 "params": { 00:15:57.864 "impl_name": "posix", 00:15:57.864 "recv_buf_size": 2097152, 00:15:57.864 "send_buf_size": 2097152, 00:15:57.864 "enable_recv_pipe": true, 00:15:57.864 "enable_quickack": false, 00:15:57.864 "enable_placement_id": 0, 00:15:57.864 "enable_zerocopy_send_server": true, 00:15:57.864 "enable_zerocopy_send_client": false, 00:15:57.864 "zerocopy_threshold": 0, 00:15:57.864 "tls_version": 0, 00:15:57.864 "enable_ktls": false 00:15:57.864 } 00:15:57.864 } 00:15:57.864 ] 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "subsystem": "vmd", 00:15:57.864 "config": [] 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "subsystem": "accel", 00:15:57.864 "config": [ 00:15:57.864 { 00:15:57.864 "method": "accel_set_options", 00:15:57.864 "params": { 00:15:57.864 "small_cache_size": 128, 00:15:57.864 "large_cache_size": 16, 00:15:57.864 "task_count": 2048, 00:15:57.864 "sequence_count": 2048, 00:15:57.864 "buf_count": 2048 00:15:57.864 } 00:15:57.864 } 00:15:57.864 ] 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "subsystem": "bdev", 00:15:57.864 "config": [ 00:15:57.864 { 00:15:57.864 "method": "bdev_set_options", 00:15:57.864 "params": { 00:15:57.864 "bdev_io_pool_size": 65535, 00:15:57.864 "bdev_io_cache_size": 256, 00:15:57.864 "bdev_auto_examine": true, 00:15:57.864 "iobuf_small_cache_size": 128, 00:15:57.864 "iobuf_large_cache_size": 16 00:15:57.864 } 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "method": "bdev_raid_set_options", 00:15:57.864 "params": { 00:15:57.864 "process_window_size_kb": 1024, 00:15:57.864 "process_max_bandwidth_mb_sec": 0 00:15:57.864 } 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "method": "bdev_iscsi_set_options", 00:15:57.864 "params": { 00:15:57.864 "timeout_sec": 30 00:15:57.864 } 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "method": "bdev_nvme_set_options", 00:15:57.864 "params": { 00:15:57.864 "action_on_timeout": "none", 00:15:57.864 "timeout_us": 0, 00:15:57.864 "timeout_admin_us": 0, 00:15:57.864 "keep_alive_timeout_ms": 10000, 00:15:57.864 "arbitration_burst": 0, 00:15:57.864 "low_priority_weight": 0, 00:15:57.864 "medium_priority_weight": 0, 00:15:57.864 "high_priority_weight": 0, 00:15:57.864 "nvme_adminq_poll_period_us": 10000, 00:15:57.864 "nvme_ioq_poll_period_us": 0, 00:15:57.864 "io_queue_requests": 0, 00:15:57.864 "delay_cmd_submit": true, 00:15:57.864 "transport_retry_count": 4, 00:15:57.864 "bdev_retry_count": 3, 00:15:57.864 "transport_ack_timeout": 0, 00:15:57.864 "ctrlr_loss_timeout_sec": 0, 00:15:57.864 "reconnect_delay_sec": 0, 00:15:57.864 "fast_io_fail_timeout_sec": 0, 00:15:57.864 "disable_auto_failback": false, 00:15:57.864 "generate_uuids": false, 00:15:57.864 "transport_tos": 0, 00:15:57.864 "nvme_error_stat": false, 
00:15:57.864 "rdma_srq_size": 0, 00:15:57.864 "io_path_stat": false, 00:15:57.864 "allow_accel_sequence": false, 00:15:57.864 "rdma_max_cq_size": 0, 00:15:57.864 "rdma_cm_event_timeout_ms": 0, 00:15:57.864 "dhchap_digests": [ 00:15:57.864 "sha256", 00:15:57.864 "sha384", 00:15:57.864 "sha512" 00:15:57.864 ], 00:15:57.864 "dhchap_dhgroups": [ 00:15:57.864 "null", 00:15:57.864 "ffdhe2048", 00:15:57.864 "ffdhe3072", 00:15:57.864 "ffdhe4096", 00:15:57.864 "ffdhe6144", 00:15:57.864 "ffdhe8192" 00:15:57.864 ] 00:15:57.864 } 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "method": "bdev_nvme_set_hotplug", 00:15:57.864 "params": { 00:15:57.864 "period_us": 100000, 00:15:57.864 "enable": false 00:15:57.864 } 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "method": "bdev_malloc_create", 00:15:57.864 "params": { 00:15:57.864 "name": "malloc0", 00:15:57.864 "num_blocks": 8192, 00:15:57.864 "block_size": 4096, 00:15:57.864 "physical_block_size": 4096, 00:15:57.864 "uuid": "2862510a-2c20-4cea-a9d2-9ffcf188d1c0", 00:15:57.864 "optimal_io_boundary": 0, 00:15:57.864 "md_size": 0, 00:15:57.864 "dif_type": 0, 00:15:57.864 "dif_is_head_of_md": false, 00:15:57.864 "dif_pi_format": 0 00:15:57.864 } 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "method": "bdev_wait_for_examine" 00:15:57.864 } 00:15:57.864 ] 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "subsystem": "scsi", 00:15:57.864 "config": null 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "subsystem": "scheduler", 00:15:57.864 "config": [ 00:15:57.864 { 00:15:57.864 "method": "framework_set_scheduler", 00:15:57.864 "params": { 00:15:57.864 "name": "static" 00:15:57.864 } 00:15:57.864 } 00:15:57.864 ] 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "subsystem": "vhost_scsi", 00:15:57.864 "config": [] 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "subsystem": "vhost_blk", 00:15:57.864 "config": [] 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "subsystem": "ublk", 00:15:57.864 "config": [ 00:15:57.864 { 00:15:57.864 "method": "ublk_create_target", 00:15:57.864 "params": { 00:15:57.864 "cpumask": "1" 00:15:57.864 } 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "method": "ublk_start_disk", 00:15:57.864 "params": { 00:15:57.864 "bdev_name": "malloc0", 00:15:57.864 "ublk_id": 0, 00:15:57.864 "num_queues": 1, 00:15:57.864 "queue_depth": 128 00:15:57.864 } 00:15:57.864 } 00:15:57.864 ] 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "subsystem": "nbd", 00:15:57.864 "config": [] 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "subsystem": "nvmf", 00:15:57.864 "config": [ 00:15:57.864 { 00:15:57.864 "method": "nvmf_set_config", 00:15:57.864 "params": { 00:15:57.864 "discovery_filter": "match_any", 00:15:57.864 "admin_cmd_passthru": { 00:15:57.864 "identify_ctrlr": false 00:15:57.864 } 00:15:57.864 } 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "method": "nvmf_set_max_subsystems", 00:15:57.864 "params": { 00:15:57.864 "max_subsystems": 1024 00:15:57.864 } 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "method": "nvmf_set_crdt", 00:15:57.864 "params": { 00:15:57.864 "crdt1": 0, 00:15:57.864 "crdt2": 0, 00:15:57.864 "crdt3": 0 00:15:57.864 } 00:15:57.864 } 00:15:57.864 ] 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "subsystem": "iscsi", 00:15:57.864 "config": [ 00:15:57.864 { 00:15:57.864 "method": "iscsi_set_options", 00:15:57.865 "params": { 00:15:57.865 "node_base": "iqn.2016-06.io.spdk", 00:15:57.865 "max_sessions": 128, 00:15:57.865 "max_connections_per_session": 2, 00:15:57.865 "max_queue_depth": 64, 00:15:57.865 "default_time2wait": 2, 00:15:57.865 "default_time2retain": 20, 00:15:57.865 
"first_burst_length": 8192, 00:15:57.865 "immediate_data": true, 00:15:57.865 "allow_duplicated_isid": false, 00:15:57.865 "error_recovery_level": 0, 00:15:57.865 "nop_timeout": 60, 00:15:57.865 "nop_in_interval": 30, 00:15:57.865 "disable_chap": false, 00:15:57.865 "require_chap": false, 00:15:57.865 "mutual_chap": false, 00:15:57.865 "chap_group": 0, 00:15:57.865 "max_large_datain_per_connection": 64, 00:15:57.865 "max_r2t_per_connection": 4, 00:15:57.865 "pdu_pool_size": 36864, 00:15:57.865 "immediate_data_pool_size": 16384, 00:15:57.865 "data_out_pool_size": 2048 00:15:57.865 } 00:15:57.865 } 00:15:57.865 ] 00:15:57.865 } 00:15:57.865 ] 00:15:57.865 }' 00:15:57.865 05:05:12 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 77165 00:15:57.865 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 77165 ']' 00:15:57.865 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 77165 00:15:57.865 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:15:57.865 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:57.865 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77165 00:15:58.123 killing process with pid 77165 00:15:58.123 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:58.123 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:58.123 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77165' 00:15:58.123 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 77165 00:15:58.123 05:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 77165 00:15:59.496 [2024-07-24 05:05:14.060147] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:59.496 [2024-07-24 05:05:14.098061] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:59.496 [2024-07-24 05:05:14.098320] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:59.496 [2024-07-24 05:05:14.106027] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:59.496 [2024-07-24 05:05:14.106102] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:59.496 [2024-07-24 05:05:14.106117] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:59.496 [2024-07-24 05:05:14.106166] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:59.496 [2024-07-24 05:05:14.106444] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:16:00.871 05:05:15 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=77221 00:16:00.871 05:05:15 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 77221 00:16:00.871 05:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 77221 ']' 00:16:00.871 05:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.871 05:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.871 05:05:15 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:00.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:00.871 05:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.871 05:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.871 05:05:15 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:00.871 "subsystems": [ 00:16:00.871 { 00:16:00.871 "subsystem": "keyring", 00:16:00.871 "config": [] 00:16:00.871 }, 00:16:00.871 { 00:16:00.871 "subsystem": "iobuf", 00:16:00.871 "config": [ 00:16:00.871 { 00:16:00.871 "method": "iobuf_set_options", 00:16:00.871 "params": { 00:16:00.871 "small_pool_count": 8192, 00:16:00.871 "large_pool_count": 1024, 00:16:00.871 "small_bufsize": 8192, 00:16:00.871 "large_bufsize": 135168 00:16:00.871 } 00:16:00.871 } 00:16:00.871 ] 00:16:00.871 }, 00:16:00.871 { 00:16:00.871 "subsystem": "sock", 00:16:00.871 "config": [ 00:16:00.871 { 00:16:00.871 "method": "sock_set_default_impl", 00:16:00.871 "params": { 00:16:00.871 "impl_name": "posix" 00:16:00.871 } 00:16:00.871 }, 00:16:00.871 { 00:16:00.871 "method": "sock_impl_set_options", 00:16:00.871 "params": { 00:16:00.871 "impl_name": "ssl", 00:16:00.871 "recv_buf_size": 4096, 00:16:00.871 "send_buf_size": 4096, 00:16:00.871 "enable_recv_pipe": true, 00:16:00.871 "enable_quickack": false, 00:16:00.871 "enable_placement_id": 0, 00:16:00.871 "enable_zerocopy_send_server": true, 00:16:00.871 "enable_zerocopy_send_client": false, 00:16:00.871 "zerocopy_threshold": 0, 00:16:00.871 "tls_version": 0, 00:16:00.871 "enable_ktls": false 00:16:00.871 } 00:16:00.871 }, 00:16:00.871 { 00:16:00.871 "method": "sock_impl_set_options", 00:16:00.871 "params": { 00:16:00.871 "impl_name": "posix", 00:16:00.871 "recv_buf_size": 2097152, 00:16:00.871 "send_buf_size": 2097152, 00:16:00.871 "enable_recv_pipe": true, 00:16:00.871 "enable_quickack": false, 00:16:00.871 "enable_placement_id": 0, 00:16:00.871 "enable_zerocopy_send_server": true, 00:16:00.871 "enable_zerocopy_send_client": false, 00:16:00.871 "zerocopy_threshold": 0, 00:16:00.871 "tls_version": 0, 00:16:00.871 "enable_ktls": false 00:16:00.871 } 00:16:00.871 } 00:16:00.871 ] 00:16:00.871 }, 00:16:00.871 { 00:16:00.871 "subsystem": "vmd", 00:16:00.871 "config": [] 00:16:00.871 }, 00:16:00.871 { 00:16:00.871 "subsystem": "accel", 00:16:00.871 "config": [ 00:16:00.871 { 00:16:00.871 "method": "accel_set_options", 00:16:00.871 "params": { 00:16:00.871 "small_cache_size": 128, 00:16:00.871 "large_cache_size": 16, 00:16:00.871 "task_count": 2048, 00:16:00.871 "sequence_count": 2048, 00:16:00.871 "buf_count": 2048 00:16:00.871 } 00:16:00.871 } 00:16:00.871 ] 00:16:00.871 }, 00:16:00.871 { 00:16:00.871 "subsystem": "bdev", 00:16:00.871 "config": [ 00:16:00.871 { 00:16:00.871 "method": "bdev_set_options", 00:16:00.871 "params": { 00:16:00.871 "bdev_io_pool_size": 65535, 00:16:00.871 "bdev_io_cache_size": 256, 00:16:00.871 "bdev_auto_examine": true, 00:16:00.871 "iobuf_small_cache_size": 128, 00:16:00.871 "iobuf_large_cache_size": 16 00:16:00.871 } 00:16:00.871 }, 00:16:00.871 { 00:16:00.871 "method": "bdev_raid_set_options", 00:16:00.871 "params": { 00:16:00.871 "process_window_size_kb": 1024, 00:16:00.871 "process_max_bandwidth_mb_sec": 0 00:16:00.871 } 00:16:00.871 }, 00:16:00.871 { 00:16:00.871 "method": "bdev_iscsi_set_options", 00:16:00.871 "params": { 00:16:00.871 "timeout_sec": 30 00:16:00.871 } 00:16:00.871 }, 00:16:00.871 { 00:16:00.871 "method": "bdev_nvme_set_options", 00:16:00.871 "params": { 
00:16:00.871 "action_on_timeout": "none", 00:16:00.871 "timeout_us": 0, 00:16:00.871 "timeout_admin_us": 0, 00:16:00.871 "keep_alive_timeout_ms": 10000, 00:16:00.871 "arbitration_burst": 0, 00:16:00.871 "low_priority_weight": 0, 00:16:00.871 "medium_priority_weight": 0, 00:16:00.871 "high_priority_weight": 0, 00:16:00.871 "nvme_adminq_poll_period_us": 10000, 00:16:00.871 "nvme_ioq_poll_period_us": 0, 00:16:00.871 "io_queue_requests": 0, 00:16:00.871 "delay_cmd_submit": true, 00:16:00.871 "transport_retry_count": 4, 00:16:00.871 "bdev_retry_count": 3, 00:16:00.871 "transport_ack_timeout": 0, 00:16:00.871 "ctrlr_loss_timeout_sec": 0, 00:16:00.871 "reconnect_delay_sec": 0, 00:16:00.871 "fast_io_fail_timeout_sec": 0, 00:16:00.871 "disable_auto_failback": false, 00:16:00.871 "generate_uuids": false, 00:16:00.871 "transport_tos": 0, 00:16:00.871 "nvme_error_stat": false, 00:16:00.871 "rdma_srq_size": 0, 00:16:00.871 "io_path_stat": false, 00:16:00.871 "allow_accel_sequence": false, 00:16:00.871 "rdma_max_cq_size": 0, 00:16:00.871 "rdma_cm_event_timeout_ms": 0, 00:16:00.871 "dhchap_digests": [ 00:16:00.871 "sha256", 00:16:00.871 "sha384", 00:16:00.871 "sha512" 00:16:00.871 ], 00:16:00.871 "dhchap_dhgroups": [ 00:16:00.871 "null", 00:16:00.871 "ffdhe2048", 00:16:00.871 "ffdhe3072", 00:16:00.871 "ffdhe4096", 00:16:00.871 "ffdhe6144", 00:16:00.871 "ffdhe8192" 00:16:00.871 ] 00:16:00.871 } 00:16:00.871 }, 00:16:00.871 { 00:16:00.871 "method": "bdev_nvme_set_hotplug", 00:16:00.871 "params": { 00:16:00.871 "period_us": 100000, 00:16:00.872 "enable": false 00:16:00.872 } 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "method": "bdev_malloc_create", 00:16:00.872 "params": { 00:16:00.872 "name": "malloc0", 00:16:00.872 "num_blocks": 8192, 00:16:00.872 "block_size": 4096, 00:16:00.872 "physical_block_size": 4096, 00:16:00.872 "uuid": "2862510a-2c20-4cea-a9d2-9ffcf188d1c0", 00:16:00.872 "optimal_io_boundary": 0, 00:16:00.872 "md_size": 0, 00:16:00.872 "dif_type": 0, 00:16:00.872 "dif_is_head_of_md": false, 00:16:00.872 "dif_pi_format": 0 00:16:00.872 } 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "method": "bdev_wait_for_examine" 00:16:00.872 } 00:16:00.872 ] 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "subsystem": "scsi", 00:16:00.872 "config": null 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "subsystem": "scheduler", 00:16:00.872 "config": [ 00:16:00.872 { 00:16:00.872 "method": "framework_set_scheduler", 00:16:00.872 "params": { 00:16:00.872 "name": "static" 00:16:00.872 } 00:16:00.872 } 00:16:00.872 ] 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "subsystem": "vhost_scsi", 00:16:00.872 "config": [] 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "subsystem": "vhost_blk", 00:16:00.872 "config": [] 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "subsystem": "ublk", 00:16:00.872 "config": [ 00:16:00.872 { 00:16:00.872 "method": "ublk_create_target", 00:16:00.872 "params": { 00:16:00.872 "cpumask": "1" 00:16:00.872 } 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "method": "ublk_start_disk", 00:16:00.872 "params": { 00:16:00.872 "bdev_name": "malloc0", 00:16:00.872 "ublk_id": 0, 00:16:00.872 "num_queues": 1, 00:16:00.872 "queue_depth": 128 00:16:00.872 } 00:16:00.872 } 00:16:00.872 ] 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "subsystem": "nbd", 00:16:00.872 "config": [] 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "subsystem": "nvmf", 00:16:00.872 "config": [ 00:16:00.872 { 00:16:00.872 "method": "nvmf_set_config", 00:16:00.872 "params": { 00:16:00.872 "discovery_filter": "match_any", 00:16:00.872 "admin_cmd_passthru": { 
00:16:00.872 "identify_ctrlr": false 00:16:00.872 } 00:16:00.872 } 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "method": "nvmf_set_max_subsystems", 00:16:00.872 "params": { 00:16:00.872 "max_subsystems": 1024 00:16:00.872 } 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "method": "nvmf_set_crdt", 00:16:00.872 "params": { 00:16:00.872 "crdt1": 0, 00:16:00.872 "crdt2": 0, 00:16:00.872 "crdt3": 0 00:16:00.872 } 00:16:00.872 } 00:16:00.872 ] 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "subsystem": "iscsi", 00:16:00.872 "config": [ 00:16:00.872 { 00:16:00.872 "method": "iscsi_set_options", 00:16:00.872 "params": { 00:16:00.872 "node_base": "iqn.2016-06.io.spdk", 00:16:00.872 "max_sessions": 128, 00:16:00.872 05:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:00.872 "max_connections_per_session": 2, 00:16:00.872 "max_queue_depth": 64, 00:16:00.872 "default_time2wait": 2, 00:16:00.872 "default_time2retain": 20, 00:16:00.872 "first_burst_length": 8192, 00:16:00.872 "immediate_data": true, 00:16:00.872 "allow_duplicated_isid": false, 00:16:00.872 "error_recovery_level": 0, 00:16:00.872 "nop_timeout": 60, 00:16:00.872 "nop_in_interval": 30, 00:16:00.872 "disable_chap": false, 00:16:00.872 "require_chap": false, 00:16:00.872 "mutual_chap": false, 00:16:00.872 "chap_group": 0, 00:16:00.872 "max_large_datain_per_connection": 64, 00:16:00.872 "max_r2t_per_connection": 4, 00:16:00.872 "pdu_pool_size": 36864, 00:16:00.872 "immediate_data_pool_size": 16384, 00:16:00.872 "data_out_pool_size": 2048 00:16:00.872 } 00:16:00.872 } 00:16:00.872 ] 00:16:00.872 } 00:16:00.872 ] 00:16:00.872 }' 00:16:00.872 [2024-07-24 05:05:15.359702] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:16:00.872 [2024-07-24 05:05:15.359877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77221 ] 00:16:01.143 [2024-07-24 05:05:15.526238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.423 [2024-07-24 05:05:15.765694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.989 [2024-07-24 05:05:16.558875] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:01.989 [2024-07-24 05:05:16.560005] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:01.989 [2024-07-24 05:05:16.566092] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:01.989 [2024-07-24 05:05:16.566226] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:01.989 [2024-07-24 05:05:16.566241] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:01.989 [2024-07-24 05:05:16.566250] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:01.989 [2024-07-24 05:05:16.573978] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:01.990 [2024-07-24 05:05:16.574003] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:01.990 [2024-07-24 05:05:16.580921] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:01.990 [2024-07-24 05:05:16.581041] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:01.990 [2024-07-24 05:05:16.600930] ublk.c: 
329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 77221 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 77221 ']' 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 77221 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77221 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:02.248 killing process with pid 77221 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77221' 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 77221 00:16:02.248 05:05:16 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 77221 00:16:03.622 [2024-07-24 05:05:17.989382] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:03.622 [2024-07-24 05:05:18.026965] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:03.622 [2024-07-24 05:05:18.027171] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:03.622 [2024-07-24 05:05:18.035905] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:03.622 [2024-07-24 05:05:18.035987] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:03.622 [2024-07-24 05:05:18.036000] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:03.622 [2024-07-24 05:05:18.036047] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:16:03.622 [2024-07-24 05:05:18.040111] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:16:04.557 05:05:19 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:04.557 00:16:04.557 real 0m8.223s 00:16:04.557 user 0m6.791s 00:16:04.557 sys 0m2.288s 00:16:04.557 ************************************ 00:16:04.557 END TEST test_save_ublk_config 00:16:04.557 ************************************ 00:16:04.557 05:05:19 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.557 05:05:19 ublk.test_save_ublk_config -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.557 05:05:19 ublk -- ublk/ublk.sh@139 -- # spdk_pid=77302 00:16:04.557 05:05:19 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:04.557 05:05:19 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:04.557 05:05:19 ublk -- ublk/ublk.sh@141 -- # waitforlisten 77302 00:16:04.557 05:05:19 ublk -- common/autotest_common.sh@829 -- # '[' -z 77302 ']' 00:16:04.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.557 05:05:19 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.557 05:05:19 ublk -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.557 05:05:19 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.557 05:05:19 ublk -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.557 05:05:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:04.816 [2024-07-24 05:05:19.263848] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:16:04.816 [2024-07-24 05:05:19.264297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77302 ] 00:16:04.816 [2024-07-24 05:05:19.419359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:05.074 [2024-07-24 05:05:19.581867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.074 [2024-07-24 05:05:19.581882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.641 05:05:20 ublk -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.641 05:05:20 ublk -- common/autotest_common.sh@862 -- # return 0 00:16:05.641 05:05:20 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:05.641 05:05:20 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:05.641 05:05:20 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.641 05:05:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.641 ************************************ 00:16:05.641 START TEST test_create_ublk 00:16:05.641 ************************************ 00:16:05.641 05:05:20 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk 00:16:05.641 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:05.641 05:05:20 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.641 05:05:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.641 [2024-07-24 05:05:20.254905] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:05.641 [2024-07-24 05:05:20.257295] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:05.641 05:05:20 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.641 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:05.641 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:05.642 05:05:20 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.642 05:05:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.900 05:05:20 ublk.test_create_ublk 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.900 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:05.900 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:05.900 05:05:20 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.900 05:05:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.900 [2024-07-24 05:05:20.480168] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:05.900 [2024-07-24 05:05:20.480705] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:05.900 [2024-07-24 05:05:20.480730] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:05.900 [2024-07-24 05:05:20.480743] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:05.900 [2024-07-24 05:05:20.489238] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:05.900 [2024-07-24 05:05:20.489309] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:05.900 [2024-07-24 05:05:20.496037] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:05.900 [2024-07-24 05:05:20.508188] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:05.900 [2024-07-24 05:05:20.524964] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:06.158 05:05:20 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.158 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:06.158 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:06.158 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:06.158 05:05:20 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.158 05:05:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.158 05:05:20 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.158 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:06.158 { 00:16:06.158 "ublk_device": "/dev/ublkb0", 00:16:06.158 "id": 0, 00:16:06.158 "queue_depth": 512, 00:16:06.158 "num_queues": 4, 00:16:06.158 "bdev_name": "Malloc0" 00:16:06.158 } 00:16:06.158 ]' 00:16:06.158 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:06.158 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:06.158 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:06.158 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:06.158 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:06.158 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:06.158 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:06.158 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:06.158 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:06.417 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:06.417 05:05:20 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:06.417 
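The bring-up traced above reduces, once the test plumbing is stripped away, to four JSON-RPC calls against the running spdk_tgt. A minimal sketch with SPDK's rpc.py (default /var/tmp/spdk.sock socket assumed; sizes and queue settings copied from the trace):

    # target was started earlier as: spdk_tgt -m 0x3 -L ublk
    scripts/rpc.py ublk_create_target                      # create the kernel-facing ublk target
    scripts/rpc.py bdev_malloc_create 128 4096             # 128 MiB bdev, 4 KiB blocks -> "Malloc0"
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # ADD_DEV/SET_PARAMS/START_DEV -> /dev/ublkb0
    scripts/rpc.py ublk_get_disks                          # JSON array the jq assertions above check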
05:05:20 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:06.417 05:05:20 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:06.417 05:05:20 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:06.417 05:05:20 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:06.417 05:05:20 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:06.417 05:05:20 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:06.417 05:05:20 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:06.417 05:05:20 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:06.417 05:05:20 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:06.417 05:05:20 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:06.417 05:05:20 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:06.417 fio: verification read phase will never start because write phase uses all of runtime 00:16:06.417 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:06.417 fio-3.35 00:16:06.417 Starting 1 process 00:16:18.613 00:16:18.613 fio_test: (groupid=0, jobs=1): err= 0: pid=77348: Wed Jul 24 05:05:31 2024 00:16:18.613 write: IOPS=10.5k, BW=41.2MiB/s (43.2MB/s)(412MiB/10001msec); 0 zone resets 00:16:18.613 clat (usec): min=62, max=9327, avg=93.38, stdev=167.15 00:16:18.613 lat (usec): min=62, max=9349, avg=94.11, stdev=167.17 00:16:18.613 clat percentiles (usec): 00:16:18.613 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 75], 00:16:18.613 | 30.00th=[ 76], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 81], 00:16:18.613 | 70.00th=[ 88], 80.00th=[ 93], 90.00th=[ 104], 95.00th=[ 113], 00:16:18.613 | 99.00th=[ 135], 99.50th=[ 157], 99.90th=[ 3294], 99.95th=[ 3720], 00:16:18.613 | 99.99th=[ 4113] 00:16:18.613 bw ( KiB/s): min=17768, max=44936, per=99.97%, avg=42153.68, stdev=5939.28, samples=19 00:16:18.613 iops : min= 4442, max=11234, avg=10538.42, stdev=1484.82, samples=19 00:16:18.613 lat (usec) : 100=86.81%, 250=12.76%, 500=0.02%, 750=0.02%, 1000=0.02% 00:16:18.613 lat (msec) : 2=0.11%, 4=0.24%, 10=0.02% 00:16:18.613 cpu : usr=2.71%, sys=7.74%, ctx=105429, majf=0, minf=797 00:16:18.613 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.613 issued rwts: total=0,105428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.613 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.613 00:16:18.613 Run status group 0 (all jobs): 00:16:18.613 WRITE: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=412MiB (432MB), run=10001-10001msec 00:16:18.613 00:16:18.613 Disk stats (read/write): 00:16:18.613 ublkb0: ios=0/104290, merge=0/0, ticks=0/8930, in_queue=8931, util=99.11% 00:16:18.613 05:05:31 
ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.613 [2024-07-24 05:05:31.043676] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:18.613 [2024-07-24 05:05:31.088445] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:18.613 [2024-07-24 05:05:31.089643] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:18.613 [2024-07-24 05:05:31.101349] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:18.613 [2024-07-24 05:05:31.105321] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:18.613 [2024-07-24 05:05:31.105359] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.613 05:05:31 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.613 [2024-07-24 05:05:31.115062] ublk.c:1053:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:18.613 request: 00:16:18.613 { 00:16:18.613 "ublk_id": 0, 00:16:18.613 "method": "ublk_stop_disk", 00:16:18.613 "req_id": 1 00:16:18.613 } 00:16:18.613 Got JSON-RPC error response 00:16:18.613 response: 00:16:18.613 { 00:16:18.613 "code": -19, 00:16:18.613 "message": "No such device" 00:16:18.613 } 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:18.613 05:05:31 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.613 [2024-07-24 05:05:31.130011] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:16:18.613 [2024-07-24 05:05:31.142925] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:16:18.613 [2024-07-24 05:05:31.142984] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:18.613 
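Teardown mirrors bring-up in reverse, and the NOT wrapper above depends on the second stop failing with -ENODEV. As a sketch:

    scripts/rpc.py ublk_stop_disk 0            # STOP_DEV + DEL_DEV, /dev/ublkb0 goes away
    scripts/rpc.py ublk_stop_disk 0            # now fails: code -19, "No such device"
    scripts/rpc.py ublk_destroy_target         # shuts the ublk target down
    scripts/rpc.py bdev_malloc_delete Malloc0  # the bdev delete that follows below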
05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.613 05:05:31 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.613 05:05:31 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:18.613 05:05:31 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.613 05:05:31 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:18.613 05:05:31 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:18.613 05:05:31 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:18.613 05:05:31 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.613 05:05:31 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:18.613 05:05:31 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:18.613 05:05:31 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:18.613 00:16:18.613 real 0m11.302s 00:16:18.613 user 0m0.709s 00:16:18.613 sys 0m0.880s 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:18.613 05:05:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.613 ************************************ 00:16:18.613 END TEST test_create_ublk 00:16:18.613 ************************************ 00:16:18.613 05:05:31 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:18.613 05:05:31 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:18.613 05:05:31 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:18.613 05:05:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.613 ************************************ 00:16:18.613 START TEST test_create_multi_ublk 00:16:18.613 ************************************ 00:16:18.613 05:05:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:16:18.613 05:05:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:18.613 05:05:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.613 05:05:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.613 [2024-07-24 05:05:31.613940] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:18.613 [2024-07-24 05:05:31.616387] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:18.613 05:05:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.613 05:05:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 
00:16:18.613 05:05:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.614 [2024-07-24 05:05:31.844195] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:18.614 [2024-07-24 05:05:31.844772] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:18.614 [2024-07-24 05:05:31.844800] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:18.614 [2024-07-24 05:05:31.844812] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.614 [2024-07-24 05:05:31.852174] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.614 [2024-07-24 05:05:31.852196] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.614 [2024-07-24 05:05:31.859000] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:18.614 [2024-07-24 05:05:31.859772] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:18.614 [2024-07-24 05:05:31.870020] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.614 05:05:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.614 [2024-07-24 05:05:32.114073] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:18.614 [2024-07-24 05:05:32.114614] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:18.614 [2024-07-24 05:05:32.114653] 
ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:18.614 [2024-07-24 05:05:32.114667] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.614 [2024-07-24 05:05:32.121899] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.614 [2024-07-24 05:05:32.121931] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.614 [2024-07-24 05:05:32.129022] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:18.614 [2024-07-24 05:05:32.129850] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:18.614 [2024-07-24 05:05:32.144995] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.614 [2024-07-24 05:05:32.370157] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:18.614 [2024-07-24 05:05:32.370715] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:18.614 [2024-07-24 05:05:32.370745] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:18.614 [2024-07-24 05:05:32.370756] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.614 [2024-07-24 05:05:32.377901] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.614 [2024-07-24 05:05:32.377928] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.614 [2024-07-24 05:05:32.384952] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:18.614 [2024-07-24 05:05:32.385703] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:18.614 [2024-07-24 05:05:32.394010] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.614 [2024-07-24 05:05:32.633069] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:18.614 [2024-07-24 05:05:32.633589] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:18.614 [2024-07-24 05:05:32.633612] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:18.614 [2024-07-24 05:05:32.633625] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.614 [2024-07-24 05:05:32.639898] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.614 [2024-07-24 05:05:32.639927] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.614 [2024-07-24 05:05:32.653929] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:18.614 [2024-07-24 05:05:32.654657] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:18.614 [2024-07-24 05:05:32.672073] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:18.614 { 00:16:18.614 "ublk_device": "/dev/ublkb0", 00:16:18.614 "id": 0, 00:16:18.614 "queue_depth": 512, 00:16:18.614 "num_queues": 4, 00:16:18.614 "bdev_name": "Malloc0" 00:16:18.614 }, 00:16:18.614 { 00:16:18.614 "ublk_device": "/dev/ublkb1", 00:16:18.614 "id": 1, 00:16:18.614 "queue_depth": 512, 00:16:18.614 "num_queues": 4, 00:16:18.614 "bdev_name": "Malloc1" 00:16:18.614 }, 00:16:18.614 { 00:16:18.614 "ublk_device": "/dev/ublkb2", 00:16:18.614 "id": 2, 00:16:18.614 "queue_depth": 512, 00:16:18.614 "num_queues": 4, 00:16:18.614 "bdev_name": "Malloc2" 00:16:18.614 }, 00:16:18.614 { 00:16:18.614 "ublk_device": "/dev/ublkb3", 00:16:18.614 "id": 3, 00:16:18.614 "queue_depth": 512, 00:16:18.614 "num_queues": 4, 00:16:18.614 "bdev_name": "Malloc3" 00:16:18.614 } 00:16:18.614 ]' 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- 
# [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.614 05:05:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:18.615 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:18.615 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:18.615 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:18.615 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:18.615 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:18.615 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:18.615 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:18.615 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:18.615 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:18.615 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.615 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:18.873 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:18.873 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:18.873 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:18.873 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:18.873 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:18.873 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:18.873 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:18.873 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:18.873 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:18.873 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.873 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 
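The four devices come from one loop over $(seq 0 $MAX_DEV_ID), each iteration repeating the malloc-bdev-plus-start pair from the single-device test. Roughly:

    for i in 0 1 2 3; do                                        # MAX_DEV_ID is 3 here
        scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
        scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512  # -> /dev/ublkb$i
    done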
00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.132 05:05:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:19.132 [2024-07-24 05:05:33.752289] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:19.390 [2024-07-24 05:05:33.788266] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:19.390 [2024-07-24 05:05:33.789807] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:19.390 [2024-07-24 05:05:33.796025] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:19.390 [2024-07-24 05:05:33.796408] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:19.390 [2024-07-24 05:05:33.796429] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:19.390 [2024-07-24 05:05:33.812073] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:19.390 [2024-07-24 05:05:33.844406] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:19.390 [2024-07-24 05:05:33.846060] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:19.390 [2024-07-24 05:05:33.851964] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:19.390 [2024-07-24 05:05:33.852355] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:19.390 [2024-07-24 05:05:33.852375] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:19.390 [2024-07-24 05:05:33.860202] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: 
ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:19.390 [2024-07-24 05:05:33.904364] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:19.390 [2024-07-24 05:05:33.905764] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:19.390 [2024-07-24 05:05:33.911939] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:19.390 [2024-07-24 05:05:33.912291] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:19.390 [2024-07-24 05:05:33.912311] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:19.390 [2024-07-24 05:05:33.928088] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:19.390 [2024-07-24 05:05:33.967115] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:19.390 [2024-07-24 05:05:33.968375] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:19.390 [2024-07-24 05:05:33.974990] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:19.390 [2024-07-24 05:05:33.975325] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:19.390 [2024-07-24 05:05:33.975343] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.390 05:05:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:19.661 [2024-07-24 05:05:34.268014] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:16:19.661 [2024-07-24 05:05:34.276032] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:16:19.661 [2024-07-24 05:05:34.276108] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:19.941 05:05:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:19.941 05:05:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:19.941 05:05:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:19.941 05:05:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.941 05:05:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:20.199 05:05:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.199 05:05:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:20.199 05:05:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:20.199 05:05:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.199 05:05:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:20.458 05:05:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.458 05:05:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for 
i in $(seq 0 $MAX_DEV_ID) 00:16:20.458 05:05:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:20.458 05:05:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.458 05:05:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:20.717 05:05:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.717 05:05:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:20.717 05:05:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:20.717 05:05:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.717 05:05:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:20.975 05:05:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.975 05:05:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:20.975 05:05:35 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:20.975 05:05:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.976 05:05:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:20.976 05:05:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.976 05:05:35 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:20.976 05:05:35 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:20.976 05:05:35 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:20.976 05:05:35 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:20.976 05:05:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.976 05:05:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:20.976 05:05:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.976 05:05:35 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:20.976 05:05:35 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:20.976 05:05:35 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:20.976 00:16:20.976 real 0m3.937s 00:16:20.976 user 0m1.292s 00:16:20.976 sys 0m0.199s 00:16:20.976 05:05:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:20.976 05:05:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:20.976 ************************************ 00:16:20.976 END TEST test_create_multi_ublk 00:16:20.976 ************************************ 00:16:20.976 05:05:35 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:20.976 05:05:35 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:20.976 05:05:35 ublk -- ublk/ublk.sh@130 -- # killprocess 77302 00:16:20.976 05:05:35 ublk -- common/autotest_common.sh@948 -- # '[' -z 77302 ']' 00:16:20.976 05:05:35 ublk -- common/autotest_common.sh@952 -- # kill -0 77302 00:16:20.976 05:05:35 ublk -- common/autotest_common.sh@953 -- # uname 00:16:20.976 05:05:35 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:20.976 05:05:35 ublk -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77302 00:16:21.234 05:05:35 ublk -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:21.234 
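check_leftover_devices, run at the end of both tests, is just two emptiness assertions over RPC output:

    [ "$(scripts/rpc.py bdev_get_bdevs | jq length)" == 0 ]          # no bdevs left behind
    [ "$(scripts/rpc.py bdev_lvol_get_lvstores | jq length)" == 0 ]  # no lvstores left behind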
killing process with pid 77302 00:16:21.234 05:05:35 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:21.234 05:05:35 ublk -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77302' 00:16:21.234 05:05:35 ublk -- common/autotest_common.sh@967 -- # kill 77302 00:16:21.234 05:05:35 ublk -- common/autotest_common.sh@972 -- # wait 77302 00:16:22.171 [2024-07-24 05:05:36.499154] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:16:22.171 [2024-07-24 05:05:36.499243] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:16:23.107 00:16:23.107 real 0m26.726s 00:16:23.107 user 0m40.410s 00:16:23.107 sys 0m8.061s 00:16:23.107 05:05:37 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:23.107 05:05:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:23.107 ************************************ 00:16:23.107 END TEST ublk 00:16:23.107 ************************************ 00:16:23.107 05:05:37 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:23.107 05:05:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:23.107 05:05:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.107 05:05:37 -- common/autotest_common.sh@10 -- # set +x 00:16:23.107 ************************************ 00:16:23.107 START TEST ublk_recovery 00:16:23.107 ************************************ 00:16:23.107 05:05:37 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:23.107 * Looking for test storage... 00:16:23.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:23.107 05:05:37 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:23.107 05:05:37 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:23.107 05:05:37 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:23.107 05:05:37 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:23.107 05:05:37 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:23.107 05:05:37 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:23.107 05:05:37 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:23.107 05:05:37 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:23.107 05:05:37 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:23.107 05:05:37 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:23.107 05:05:37 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=77683 00:16:23.107 05:05:37 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:23.107 05:05:37 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:23.107 05:05:37 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 77683 00:16:23.107 05:05:37 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 77683 ']' 00:16:23.107 05:05:37 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.107 05:05:37 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.107 05:05:37 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:23.107 05:05:37 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.107 05:05:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:23.366 [2024-07-24 05:05:37.780665] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:16:23.366 [2024-07-24 05:05:37.780855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77683 ] 00:16:23.366 [2024-07-24 05:05:37.951512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:23.624 [2024-07-24 05:05:38.116090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.624 [2024-07-24 05:05:38.116104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.189 05:05:38 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.189 05:05:38 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:16:24.189 05:05:38 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:24.189 05:05:38 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.189 05:05:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:24.189 [2024-07-24 05:05:38.774922] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:24.189 [2024-07-24 05:05:38.777363] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:24.189 05:05:38 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.189 05:05:38 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:24.189 05:05:38 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.189 05:05:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:24.447 malloc0 00:16:24.447 05:05:38 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.447 05:05:38 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:24.447 05:05:38 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.447 05:05:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:24.447 [2024-07-24 05:05:38.904119] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:16:24.447 [2024-07-24 05:05:38.904268] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:24.447 [2024-07-24 05:05:38.904282] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:24.447 [2024-07-24 05:05:38.904293] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:24.447 [2024-07-24 05:05:38.913029] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:24.447 [2024-07-24 05:05:38.913062] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:24.447 [2024-07-24 05:05:38.922931] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:24.447 [2024-07-24 05:05:38.923127] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:24.447 1 00:16:24.447 [2024-07-24 05:05:38.930056] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:24.447 05:05:38 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:16:24.447 05:05:38 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:25.382 05:05:39 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=77719 00:16:25.382 05:05:39 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:25.382 05:05:39 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:25.639 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:25.639 fio-3.35 00:16:25.639 Starting 1 process 00:16:30.922 05:05:44 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 77683 00:16:30.922 05:05:44 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:36.192 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 77683 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:36.192 05:05:49 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=77825 00:16:36.192 05:05:49 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:36.192 05:05:49 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:36.192 05:05:49 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 77825 00:16:36.192 05:05:49 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 77825 ']' 00:16:36.192 05:05:49 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.192 05:05:49 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.192 05:05:49 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.192 05:05:49 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.192 05:05:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:36.192 [2024-07-24 05:05:50.072411] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:16:36.192 [2024-07-24 05:05:50.072589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77825 ] 00:16:36.192 [2024-07-24 05:05:50.240409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:36.192 [2024-07-24 05:05:50.410455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.192 [2024-07-24 05:05:50.410481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.451 05:05:51 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.451 05:05:51 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:16:36.451 05:05:51 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:36.451 05:05:51 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.451 05:05:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:36.451 [2024-07-24 05:05:51.077976] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:36.451 [2024-07-24 05:05:51.080570] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:36.709 05:05:51 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.709 05:05:51 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:36.709 05:05:51 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.709 05:05:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:36.710 malloc0 00:16:36.710 05:05:51 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.710 05:05:51 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:36.710 05:05:51 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.710 05:05:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:36.710 [2024-07-24 05:05:51.196106] ublk.c:2077:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:36.710 [2024-07-24 05:05:51.196185] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:36.710 [2024-07-24 05:05:51.196197] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:36.710 [2024-07-24 05:05:51.204099] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:36.710 [2024-07-24 05:05:51.204126] ublk.c:2006:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:36.710 [2024-07-24 05:05:51.204264] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:36.710 1 00:16:36.710 05:05:51 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.710 05:05:51 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 77719 00:16:36.710 [2024-07-24 05:05:51.211970] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:36.710 [2024-07-24 05:05:51.219688] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:36.710 [2024-07-24 05:05:51.226333] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:36.710 [2024-07-24 05:05:51.226382] ublk.c: 379:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:32.965 00:17:32.966 
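That "recover done successfully" notice is the point of ublk_recovery.sh: the first target (pid 77683) is SIGKILLed a few seconds into a 60-second fio job, a fresh target is started, and the still-open /dev/ublkb1 is re-adopted without restarting fio. Condensed (pids are run-specific):

    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # /dev/ublkb1 on the first target
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &

    kill -9 "$spdk_pid"                                    # simulate a target crash mid-I/O
    build/bin/spdk_tgt -m 0x3 -L ublk &                    # second target, pid 77825 here
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1             # GET_DEV_INFO + START/END_USER_RECOVERY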
fio_test: (groupid=0, jobs=1): err= 0: pid=77722: Wed Jul 24 05:06:40 2024 00:17:32.966 read: IOPS=19.6k, BW=76.4MiB/s (80.1MB/s)(4585MiB/60002msec) 00:17:32.966 slat (nsec): min=1985, max=687640, avg=6001.01, stdev=2998.01 00:17:32.966 clat (usec): min=1046, max=6296.8k, avg=3231.34, stdev=47540.54 00:17:32.966 lat (usec): min=1051, max=6296.8k, avg=3237.34, stdev=47540.54 00:17:32.966 clat percentiles (usec): 00:17:32.966 | 1.00th=[ 2343], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2606], 00:17:32.966 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2802], 00:17:32.966 | 70.00th=[ 2868], 80.00th=[ 2966], 90.00th=[ 3130], 95.00th=[ 3720], 00:17:32.966 | 99.00th=[ 5407], 99.50th=[ 6194], 99.90th=[ 7570], 99.95th=[ 8455], 00:17:32.966 | 99.99th=[12911] 00:17:32.966 bw ( KiB/s): min=37752, max=93288, per=100.00%, avg=87017.79, stdev=7665.57, samples=107 00:17:32.966 iops : min= 9438, max=23322, avg=21754.44, stdev=1916.39, samples=107 00:17:32.966 write: IOPS=19.6k, BW=76.4MiB/s (80.1MB/s)(4582MiB/60002msec); 0 zone resets 00:17:32.966 slat (usec): min=2, max=786, avg= 6.07, stdev= 3.18 00:17:32.966 clat (usec): min=802, max=6297.0k, avg=3300.41, stdev=45376.04 00:17:32.966 lat (usec): min=807, max=6297.0k, avg=3306.49, stdev=45376.05 00:17:32.966 clat percentiles (usec): 00:17:32.966 | 1.00th=[ 2442], 5.00th=[ 2606], 10.00th=[ 2671], 20.00th=[ 2737], 00:17:32.966 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2868], 60.00th=[ 2933], 00:17:32.966 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3621], 00:17:32.966 | 99.00th=[ 5407], 99.50th=[ 6259], 99.90th=[ 7504], 99.95th=[ 8225], 00:17:32.966 | 99.99th=[13042] 00:17:32.966 bw ( KiB/s): min=36632, max=93824, per=100.00%, avg=86974.89, stdev=7710.10, samples=107 00:17:32.966 iops : min= 9158, max=23456, avg=21743.72, stdev=1927.52, samples=107 00:17:32.966 lat (usec) : 1000=0.01% 00:17:32.966 lat (msec) : 2=0.16%, 4=95.88%, 10=3.94%, 20=0.01%, >=2000=0.01% 00:17:32.966 cpu : usr=10.18%, sys=21.94%, ctx=70241, majf=0, minf=13 00:17:32.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:32.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:32.966 issued rwts: total=1173693,1173073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:32.966 00:17:32.966 Run status group 0 (all jobs): 00:17:32.966 READ: bw=76.4MiB/s (80.1MB/s), 76.4MiB/s-76.4MiB/s (80.1MB/s-80.1MB/s), io=4585MiB (4807MB), run=60002-60002msec 00:17:32.966 WRITE: bw=76.4MiB/s (80.1MB/s), 76.4MiB/s-76.4MiB/s (80.1MB/s-80.1MB/s), io=4582MiB (4805MB), run=60002-60002msec 00:17:32.966 00:17:32.966 Disk stats (read/write): 00:17:32.966 ublkb1: ios=1171289/1170583, merge=0/0, ticks=3685946/3639338, in_queue=7325285, util=99.93% 00:17:32.966 05:06:40 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.966 [2024-07-24 05:06:40.205339] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:32.966 [2024-07-24 05:06:40.249018] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:32.966 [2024-07-24 05:06:40.252939] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 
00:17:32.966 [2024-07-24 05:06:40.261981] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:32.966 [2024-07-24 05:06:40.262159] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:32.966 [2024-07-24 05:06:40.262196] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.966 05:06:40 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.966 [2024-07-24 05:06:40.271034] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:17:32.966 [2024-07-24 05:06:40.277937] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:17:32.966 [2024-07-24 05:06:40.277976] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.966 05:06:40 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:32.966 05:06:40 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:32.966 05:06:40 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 77825 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 77825 ']' 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 77825 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77825 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:32.966 killing process with pid 77825 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77825' 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@967 -- # kill 77825 00:17:32.966 05:06:40 ublk_recovery -- common/autotest_common.sh@972 -- # wait 77825 00:17:32.966 [2024-07-24 05:06:41.165156] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:17:32.966 [2024-07-24 05:06:41.165230] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:17:32.966 00:17:32.966 real 1m4.767s 00:17:32.966 user 1m47.555s 00:17:32.966 sys 0m30.193s 00:17:32.966 05:06:42 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:32.966 05:06:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.966 ************************************ 00:17:32.966 END TEST ublk_recovery 00:17:32.966 ************************************ 00:17:32.966 05:06:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:32.966 05:06:42 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:32.966 05:06:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:32.966 05:06:42 -- common/autotest_common.sh@10 -- # set +x 00:17:32.966 05:06:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:32.966 05:06:42 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:17:32.966 05:06:42 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:17:32.966 05:06:42 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:17:32.966 05:06:42 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:17:32.966 05:06:42 -- spdk/autotest.sh@316 -- # '[' 
0 -eq 1 ']' 00:17:32.966 05:06:42 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:17:32.966 05:06:42 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:17:32.966 05:06:42 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:17:32.966 05:06:42 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:17:32.966 05:06:42 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:32.966 05:06:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:32.966 05:06:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:32.966 05:06:42 -- common/autotest_common.sh@10 -- # set +x 00:17:32.966 ************************************ 00:17:32.966 START TEST ftl 00:17:32.966 ************************************ 00:17:32.966 05:06:42 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:32.966 * Looking for test storage... 00:17:32.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:32.966 05:06:42 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:32.966 05:06:42 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:32.966 05:06:42 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:32.966 05:06:42 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:32.966 05:06:42 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:32.966 05:06:42 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:32.966 05:06:42 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:32.966 05:06:42 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:32.966 05:06:42 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:32.966 05:06:42 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:32.966 05:06:42 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:32.966 05:06:42 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:32.966 05:06:42 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:32.966 05:06:42 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:32.966 05:06:42 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:32.966 05:06:42 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:32.966 05:06:42 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:32.966 05:06:42 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:32.966 05:06:42 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:32.966 05:06:42 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:32.966 05:06:42 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:32.966 05:06:42 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:32.966 05:06:42 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:32.966 05:06:42 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:32.966 05:06:42 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:32.966 05:06:42 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:32.966 05:06:42 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:32.966 
05:06:42 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:32.966 05:06:42 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:32.966 05:06:42 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:32.966 05:06:42 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:32.966 05:06:42 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:32.966 05:06:42 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:32.967 05:06:42 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:32.967 05:06:42 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:32.967 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:32.967 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:32.967 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:32.967 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:32.967 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:32.967 05:06:43 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=78607 00:17:32.967 05:06:43 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:32.967 05:06:43 ftl -- ftl/ftl.sh@38 -- # waitforlisten 78607 00:17:32.967 05:06:43 ftl -- common/autotest_common.sh@829 -- # '[' -z 78607 ']' 00:17:32.967 05:06:43 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.967 05:06:43 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.967 05:06:43 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.967 05:06:43 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.967 05:06:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:32.967 [2024-07-24 05:06:43.167476] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
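ftl.sh brings the target up with --wait-for-rpc, which parks spdk_tgt after EAL init until an RPC explicitly starts the subsystems; that window is what lets the test turn off bdev auto-examine (bdev_set_options -d) before any NVMe namespace is probed. A minimal sketch of that handshake, using the same repo paths as this run (the harness itself waits on the socket via waitforlisten):

    # Launch the target, deferring subsystem initialization.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    tgt_pid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Crude stand-in for waitforlisten: block until the RPC socket exists.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    $rpc bdev_set_options -d        # must run before the bdev subsystem starts
    $rpc framework_start_init       # now perform the deferred initialization
    # gen_nvme.sh emits a bdev config for every local controller; process
    # substitution matches the /dev/fd/62 seen in the trace below.
    $rpc load_subsystem_config -j <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)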
00:17:32.967 [2024-07-24 05:06:43.167665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78607 ] 00:17:32.967 [2024-07-24 05:06:43.341690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.967 [2024-07-24 05:06:43.570470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.967 05:06:44 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.967 05:06:44 ftl -- common/autotest_common.sh@862 -- # return 0 00:17:32.967 05:06:44 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:32.967 05:06:44 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:32.967 05:06:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:32.967 05:06:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:32.967 05:06:45 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:32.967 05:06:45 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:32.967 05:06:45 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:32.967 05:06:45 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:32.967 05:06:45 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:32.967 05:06:45 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:32.967 05:06:45 ftl -- ftl/ftl.sh@50 -- # break 00:17:32.967 05:06:45 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:32.967 05:06:45 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:32.967 05:06:45 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:32.967 05:06:45 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:32.967 05:06:46 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:32.967 05:06:46 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:32.967 05:06:46 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:32.967 05:06:46 ftl -- ftl/ftl.sh@63 -- # break 00:17:32.967 05:06:46 ftl -- ftl/ftl.sh@66 -- # killprocess 78607 00:17:32.967 05:06:46 ftl -- common/autotest_common.sh@948 -- # '[' -z 78607 ']' 00:17:32.967 05:06:46 ftl -- common/autotest_common.sh@952 -- # kill -0 78607 00:17:32.967 05:06:46 ftl -- common/autotest_common.sh@953 -- # uname 00:17:32.967 05:06:46 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.967 05:06:46 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78607 00:17:32.967 05:06:46 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:32.967 killing process with pid 78607 00:17:32.967 05:06:46 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:32.967 05:06:46 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78607' 00:17:32.967 05:06:46 ftl -- common/autotest_common.sh@967 -- # kill 78607 00:17:32.967 05:06:46 ftl -- common/autotest_common.sh@972 -- # wait 78607 00:17:33.534 05:06:47 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:33.534 05:06:47 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:33.534 05:06:47 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:33.534 05:06:47 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:33.534 05:06:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:33.534 ************************************ 00:17:33.534 START TEST ftl_fio_basic 00:17:33.534 ************************************ 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:33.534 * Looking for test storage... 00:17:33.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=78737 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 78737 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 78737 ']' 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.534 05:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:33.793 [2024-07-24 05:06:48.248775] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
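fio.sh launches its own target with -m 7, unlike the single-core mask ftl.sh used above: the argument is a bitmap of CPU cores, and 0x7 is 0b111, which is why three reactors report in on cores 0, 1 and 2 just below. A small illustrative loop decoding such a mask (mask value taken from the command above):

    # -m/-c take a bitmap of CPU cores: 0x7 == 0b111 -> cores 0, 1, 2.
    mask=0x7
    for ((i = 0; i < 8; i++)); do
        (( (mask >> i) & 1 )) && echo "reactor expected on core $i"
    done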
00:17:33.793 [2024-07-24 05:06:48.248998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78737 ] 00:17:33.793 [2024-07-24 05:06:48.418038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:34.052 [2024-07-24 05:06:48.587515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.052 [2024-07-24 05:06:48.587643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.052 [2024-07-24 05:06:48.587664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.620 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.620 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:17:34.620 05:06:49 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:34.620 05:06:49 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:34.620 05:06:49 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:34.620 05:06:49 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:34.620 05:06:49 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:34.620 05:06:49 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:35.188 05:06:49 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:35.188 05:06:49 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:35.188 05:06:49 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:35.188 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bdev_name=nvme0n1 00:17:35.188 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local bdev_info 00:17:35.188 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bs 00:17:35.188 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local nb 00:17:35.188 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:35.447 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:17:35.447 { 00:17:35.447 "name": "nvme0n1", 00:17:35.447 "aliases": [ 00:17:35.447 "36a225ba-88e9-4aca-a3c7-2520b4ed2b57" 00:17:35.447 ], 00:17:35.447 "product_name": "NVMe disk", 00:17:35.447 "block_size": 4096, 00:17:35.447 "num_blocks": 1310720, 00:17:35.447 "uuid": "36a225ba-88e9-4aca-a3c7-2520b4ed2b57", 00:17:35.447 "assigned_rate_limits": { 00:17:35.447 "rw_ios_per_sec": 0, 00:17:35.447 "rw_mbytes_per_sec": 0, 00:17:35.447 "r_mbytes_per_sec": 0, 00:17:35.447 "w_mbytes_per_sec": 0 00:17:35.447 }, 00:17:35.447 "claimed": false, 00:17:35.447 "zoned": false, 00:17:35.447 "supported_io_types": { 00:17:35.447 "read": true, 00:17:35.447 "write": true, 00:17:35.447 "unmap": true, 00:17:35.447 "flush": true, 00:17:35.447 "reset": true, 00:17:35.447 "nvme_admin": true, 00:17:35.447 "nvme_io": true, 00:17:35.447 "nvme_io_md": false, 00:17:35.447 "write_zeroes": true, 00:17:35.447 "zcopy": false, 00:17:35.447 "get_zone_info": false, 00:17:35.447 "zone_management": false, 00:17:35.447 "zone_append": false, 00:17:35.447 "compare": true, 00:17:35.447 "compare_and_write": false, 00:17:35.447 "abort": true, 00:17:35.447 "seek_hole": false, 00:17:35.447 
"seek_data": false, 00:17:35.447 "copy": true, 00:17:35.447 "nvme_iov_md": false 00:17:35.447 }, 00:17:35.447 "driver_specific": { 00:17:35.447 "nvme": [ 00:17:35.447 { 00:17:35.447 "pci_address": "0000:00:11.0", 00:17:35.447 "trid": { 00:17:35.447 "trtype": "PCIe", 00:17:35.447 "traddr": "0000:00:11.0" 00:17:35.447 }, 00:17:35.447 "ctrlr_data": { 00:17:35.447 "cntlid": 0, 00:17:35.447 "vendor_id": "0x1b36", 00:17:35.447 "model_number": "QEMU NVMe Ctrl", 00:17:35.447 "serial_number": "12341", 00:17:35.447 "firmware_revision": "8.0.0", 00:17:35.447 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:35.447 "oacs": { 00:17:35.447 "security": 0, 00:17:35.447 "format": 1, 00:17:35.447 "firmware": 0, 00:17:35.447 "ns_manage": 1 00:17:35.447 }, 00:17:35.447 "multi_ctrlr": false, 00:17:35.447 "ana_reporting": false 00:17:35.447 }, 00:17:35.447 "vs": { 00:17:35.447 "nvme_version": "1.4" 00:17:35.447 }, 00:17:35.447 "ns_data": { 00:17:35.447 "id": 1, 00:17:35.447 "can_share": false 00:17:35.447 } 00:17:35.447 } 00:17:35.447 ], 00:17:35.447 "mp_policy": "active_passive" 00:17:35.447 } 00:17:35.447 } 00:17:35.447 ]' 00:17:35.447 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:17:35.447 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # bs=4096 00:17:35.447 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:17:35.447 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # nb=1310720 00:17:35.447 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bdev_size=5120 00:17:35.447 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # echo 5120 00:17:35.447 05:06:49 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:35.447 05:06:49 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:35.447 05:06:49 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:35.447 05:06:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:35.447 05:06:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:35.706 05:06:50 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:35.706 05:06:50 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:35.965 05:06:50 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=093b7c9d-c161-45c9-9974-7b80fe1156b9 00:17:35.965 05:06:50 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 093b7c9d-c161-45c9-9974-7b80fe1156b9 00:17:36.224 05:06:50 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=1d6f02d2-ae2d-4130-90cb-7f250b32040a 00:17:36.224 05:06:50 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1d6f02d2-ae2d-4130-90cb-7f250b32040a 00:17:36.224 05:06:50 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:36.224 05:06:50 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:36.224 05:06:50 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=1d6f02d2-ae2d-4130-90cb-7f250b32040a 00:17:36.224 05:06:50 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:36.224 05:06:50 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 1d6f02d2-ae2d-4130-90cb-7f250b32040a 00:17:36.224 05:06:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bdev_name=1d6f02d2-ae2d-4130-90cb-7f250b32040a 00:17:36.224 05:06:50 
ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local bdev_info 00:17:36.224 05:06:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bs 00:17:36.224 05:06:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local nb 00:17:36.224 05:06:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1d6f02d2-ae2d-4130-90cb-7f250b32040a 00:17:36.483 05:06:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:17:36.483 { 00:17:36.483 "name": "1d6f02d2-ae2d-4130-90cb-7f250b32040a", 00:17:36.483 "aliases": [ 00:17:36.483 "lvs/nvme0n1p0" 00:17:36.483 ], 00:17:36.483 "product_name": "Logical Volume", 00:17:36.483 "block_size": 4096, 00:17:36.483 "num_blocks": 26476544, 00:17:36.483 "uuid": "1d6f02d2-ae2d-4130-90cb-7f250b32040a", 00:17:36.483 "assigned_rate_limits": { 00:17:36.483 "rw_ios_per_sec": 0, 00:17:36.483 "rw_mbytes_per_sec": 0, 00:17:36.483 "r_mbytes_per_sec": 0, 00:17:36.483 "w_mbytes_per_sec": 0 00:17:36.483 }, 00:17:36.483 "claimed": false, 00:17:36.483 "zoned": false, 00:17:36.483 "supported_io_types": { 00:17:36.483 "read": true, 00:17:36.483 "write": true, 00:17:36.483 "unmap": true, 00:17:36.483 "flush": false, 00:17:36.483 "reset": true, 00:17:36.483 "nvme_admin": false, 00:17:36.483 "nvme_io": false, 00:17:36.483 "nvme_io_md": false, 00:17:36.483 "write_zeroes": true, 00:17:36.483 "zcopy": false, 00:17:36.483 "get_zone_info": false, 00:17:36.483 "zone_management": false, 00:17:36.483 "zone_append": false, 00:17:36.483 "compare": false, 00:17:36.483 "compare_and_write": false, 00:17:36.483 "abort": false, 00:17:36.483 "seek_hole": true, 00:17:36.483 "seek_data": true, 00:17:36.483 "copy": false, 00:17:36.483 "nvme_iov_md": false 00:17:36.483 }, 00:17:36.483 "driver_specific": { 00:17:36.483 "lvol": { 00:17:36.483 "lvol_store_uuid": "093b7c9d-c161-45c9-9974-7b80fe1156b9", 00:17:36.483 "base_bdev": "nvme0n1", 00:17:36.483 "thin_provision": true, 00:17:36.483 "num_allocated_clusters": 0, 00:17:36.483 "snapshot": false, 00:17:36.483 "clone": false, 00:17:36.483 "esnap_clone": false 00:17:36.483 } 00:17:36.483 } 00:17:36.483 } 00:17:36.483 ]' 00:17:36.483 05:06:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:17:36.483 05:06:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # bs=4096 00:17:36.483 05:06:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:17:36.483 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # nb=26476544 00:17:36.483 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:17:36.483 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # echo 103424 00:17:36.483 05:06:51 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:36.483 05:06:51 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:36.483 05:06:51 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:36.742 05:06:51 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:36.742 05:06:51 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:36.742 05:06:51 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 1d6f02d2-ae2d-4130-90cb-7f250b32040a 00:17:36.742 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bdev_name=1d6f02d2-ae2d-4130-90cb-7f250b32040a 00:17:36.742 05:06:51 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1377 -- # local bdev_info 00:17:36.742 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bs 00:17:36.742 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local nb 00:17:36.742 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1d6f02d2-ae2d-4130-90cb-7f250b32040a 00:17:37.001 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:17:37.001 { 00:17:37.001 "name": "1d6f02d2-ae2d-4130-90cb-7f250b32040a", 00:17:37.001 "aliases": [ 00:17:37.001 "lvs/nvme0n1p0" 00:17:37.001 ], 00:17:37.001 "product_name": "Logical Volume", 00:17:37.001 "block_size": 4096, 00:17:37.001 "num_blocks": 26476544, 00:17:37.001 "uuid": "1d6f02d2-ae2d-4130-90cb-7f250b32040a", 00:17:37.001 "assigned_rate_limits": { 00:17:37.001 "rw_ios_per_sec": 0, 00:17:37.001 "rw_mbytes_per_sec": 0, 00:17:37.001 "r_mbytes_per_sec": 0, 00:17:37.001 "w_mbytes_per_sec": 0 00:17:37.001 }, 00:17:37.001 "claimed": false, 00:17:37.001 "zoned": false, 00:17:37.001 "supported_io_types": { 00:17:37.001 "read": true, 00:17:37.001 "write": true, 00:17:37.001 "unmap": true, 00:17:37.001 "flush": false, 00:17:37.001 "reset": true, 00:17:37.001 "nvme_admin": false, 00:17:37.001 "nvme_io": false, 00:17:37.001 "nvme_io_md": false, 00:17:37.001 "write_zeroes": true, 00:17:37.001 "zcopy": false, 00:17:37.001 "get_zone_info": false, 00:17:37.001 "zone_management": false, 00:17:37.001 "zone_append": false, 00:17:37.001 "compare": false, 00:17:37.001 "compare_and_write": false, 00:17:37.001 "abort": false, 00:17:37.001 "seek_hole": true, 00:17:37.001 "seek_data": true, 00:17:37.001 "copy": false, 00:17:37.001 "nvme_iov_md": false 00:17:37.001 }, 00:17:37.001 "driver_specific": { 00:17:37.001 "lvol": { 00:17:37.001 "lvol_store_uuid": "093b7c9d-c161-45c9-9974-7b80fe1156b9", 00:17:37.001 "base_bdev": "nvme0n1", 00:17:37.001 "thin_provision": true, 00:17:37.001 "num_allocated_clusters": 0, 00:17:37.001 "snapshot": false, 00:17:37.001 "clone": false, 00:17:37.001 "esnap_clone": false 00:17:37.001 } 00:17:37.001 } 00:17:37.001 } 00:17:37.001 ]' 00:17:37.001 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:17:37.001 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # bs=4096 00:17:37.001 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:17:37.261 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # nb=26476544 00:17:37.261 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:17:37.261 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # echo 103424 00:17:37.261 05:06:51 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:37.261 05:06:51 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:37.261 05:06:51 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:37.261 05:06:51 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:37.261 05:06:51 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:37.261 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:37.261 05:06:51 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 1d6f02d2-ae2d-4130-90cb-7f250b32040a 00:17:37.261 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bdev_name=1d6f02d2-ae2d-4130-90cb-7f250b32040a 
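The "[: -eq: unary operator expected" message above is a script bug rather than a device failure: line 52 of fio.sh expands an unset variable inside a bare [ ... ] test, leaving [ -eq 1 ] with no left operand, so the test errors out, evaluates false, and the run keeps the l2p_percentage=60 assigned on the previous line. A defensive spelling that avoids the failure (the variable name is illustrative; the real one at fio.sh line 52 is not visible in this trace):

    # With $some_flag unset, `[ $some_flag -eq 1 ]` collapses to `[ -eq 1 ]`
    # and bash prints "[: -eq: unary operator expected". Expanding with a
    # default keeps the test well-formed:
    if [ "${some_flag:-0}" -eq 1 ]; then
        l2p_percentage=0   # branch body illustrative; not visible in the trace
    fi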
00:17:37.261 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local bdev_info 00:17:37.261 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bs 00:17:37.261 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local nb 00:17:37.261 05:06:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1d6f02d2-ae2d-4130-90cb-7f250b32040a 00:17:37.526 05:06:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:17:37.526 { 00:17:37.526 "name": "1d6f02d2-ae2d-4130-90cb-7f250b32040a", 00:17:37.526 "aliases": [ 00:17:37.526 "lvs/nvme0n1p0" 00:17:37.526 ], 00:17:37.526 "product_name": "Logical Volume", 00:17:37.526 "block_size": 4096, 00:17:37.526 "num_blocks": 26476544, 00:17:37.526 "uuid": "1d6f02d2-ae2d-4130-90cb-7f250b32040a", 00:17:37.526 "assigned_rate_limits": { 00:17:37.526 "rw_ios_per_sec": 0, 00:17:37.526 "rw_mbytes_per_sec": 0, 00:17:37.526 "r_mbytes_per_sec": 0, 00:17:37.527 "w_mbytes_per_sec": 0 00:17:37.527 }, 00:17:37.527 "claimed": false, 00:17:37.527 "zoned": false, 00:17:37.527 "supported_io_types": { 00:17:37.527 "read": true, 00:17:37.527 "write": true, 00:17:37.527 "unmap": true, 00:17:37.527 "flush": false, 00:17:37.527 "reset": true, 00:17:37.527 "nvme_admin": false, 00:17:37.527 "nvme_io": false, 00:17:37.527 "nvme_io_md": false, 00:17:37.527 "write_zeroes": true, 00:17:37.527 "zcopy": false, 00:17:37.527 "get_zone_info": false, 00:17:37.527 "zone_management": false, 00:17:37.527 "zone_append": false, 00:17:37.527 "compare": false, 00:17:37.527 "compare_and_write": false, 00:17:37.527 "abort": false, 00:17:37.527 "seek_hole": true, 00:17:37.527 "seek_data": true, 00:17:37.527 "copy": false, 00:17:37.527 "nvme_iov_md": false 00:17:37.527 }, 00:17:37.527 "driver_specific": { 00:17:37.527 "lvol": { 00:17:37.527 "lvol_store_uuid": "093b7c9d-c161-45c9-9974-7b80fe1156b9", 00:17:37.527 "base_bdev": "nvme0n1", 00:17:37.527 "thin_provision": true, 00:17:37.527 "num_allocated_clusters": 0, 00:17:37.527 "snapshot": false, 00:17:37.527 "clone": false, 00:17:37.527 "esnap_clone": false 00:17:37.527 } 00:17:37.527 } 00:17:37.527 } 00:17:37.527 ]' 00:17:37.527 05:06:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:17:37.785 05:06:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # bs=4096 00:17:37.785 05:06:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:17:37.785 05:06:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # nb=26476544 00:17:37.785 05:06:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:17:37.785 05:06:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # echo 103424 00:17:37.785 05:06:52 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:37.785 05:06:52 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:37.785 05:06:52 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1d6f02d2-ae2d-4130-90cb-7f250b32040a -c nvc0n1p0 --l2p_dram_limit 60 00:17:38.045 [2024-07-24 05:06:52.444493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.045 [2024-07-24 05:06:52.445037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:38.045 [2024-07-24 05:06:52.445158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:38.045 [2024-07-24 05:06:52.445246] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.045 [2024-07-24 05:06:52.445441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.045 [2024-07-24 05:06:52.445555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:38.045 [2024-07-24 05:06:52.445631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:38.045 [2024-07-24 05:06:52.445706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.045 [2024-07-24 05:06:52.445753] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:38.045 [2024-07-24 05:06:52.446772] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:38.045 [2024-07-24 05:06:52.446823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.045 [2024-07-24 05:06:52.446843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:38.045 [2024-07-24 05:06:52.446905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:17:38.045 [2024-07-24 05:06:52.446921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.045 [2024-07-24 05:06:52.447054] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 79781938-64aa-48ad-b24e-2244f1db1747 00:17:38.045 [2024-07-24 05:06:52.448138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.045 [2024-07-24 05:06:52.448193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:38.045 [2024-07-24 05:06:52.448227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:38.045 [2024-07-24 05:06:52.448239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.045 [2024-07-24 05:06:52.452622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.045 [2024-07-24 05:06:52.452685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:38.045 [2024-07-24 05:06:52.452724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.272 ms 00:17:38.045 [2024-07-24 05:06:52.452735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.045 [2024-07-24 05:06:52.452909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.045 [2024-07-24 05:06:52.452929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:38.045 [2024-07-24 05:06:52.452945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:17:38.045 [2024-07-24 05:06:52.452956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.045 [2024-07-24 05:06:52.453079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.045 [2024-07-24 05:06:52.453097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:38.045 [2024-07-24 05:06:52.453113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:38.045 [2024-07-24 05:06:52.453128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.045 [2024-07-24 05:06:52.453186] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:38.045 [2024-07-24 05:06:52.457702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.045 [2024-07-24 05:06:52.457763] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:38.045 [2024-07-24 05:06:52.457795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.535 ms 00:17:38.045 [2024-07-24 05:06:52.457809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.045 [2024-07-24 05:06:52.457887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.045 [2024-07-24 05:06:52.457923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:38.045 [2024-07-24 05:06:52.457936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:38.045 [2024-07-24 05:06:52.457949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.045 [2024-07-24 05:06:52.458015] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:38.045 [2024-07-24 05:06:52.458243] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:38.045 [2024-07-24 05:06:52.458267] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:38.045 [2024-07-24 05:06:52.458289] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:38.045 [2024-07-24 05:06:52.458306] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:38.045 [2024-07-24 05:06:52.458325] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:38.045 [2024-07-24 05:06:52.458340] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:38.045 [2024-07-24 05:06:52.458354] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:38.045 [2024-07-24 05:06:52.458368] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:38.045 [2024-07-24 05:06:52.458382] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:38.045 [2024-07-24 05:06:52.458395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.045 [2024-07-24 05:06:52.458408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:38.045 [2024-07-24 05:06:52.458421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:17:38.045 [2024-07-24 05:06:52.458434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.045 [2024-07-24 05:06:52.458536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.045 [2024-07-24 05:06:52.458554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:38.045 [2024-07-24 05:06:52.458567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:38.045 [2024-07-24 05:06:52.458582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.045 [2024-07-24 05:06:52.458720] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:38.045 [2024-07-24 05:06:52.458751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:38.045 [2024-07-24 05:06:52.458766] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:38.045 [2024-07-24 05:06:52.458781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.045 [2024-07-24 05:06:52.458794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:38.045 [2024-07-24 
05:06:52.458807] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:38.045 [2024-07-24 05:06:52.458819] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:38.045 [2024-07-24 05:06:52.458833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:38.045 [2024-07-24 05:06:52.458859] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:38.045 [2024-07-24 05:06:52.458875] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:38.045 [2024-07-24 05:06:52.458887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:38.045 [2024-07-24 05:06:52.458906] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:38.045 [2024-07-24 05:06:52.458918] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:38.045 [2024-07-24 05:06:52.458932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:38.045 [2024-07-24 05:06:52.458944] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:38.045 [2024-07-24 05:06:52.458957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.045 [2024-07-24 05:06:52.458968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:38.045 [2024-07-24 05:06:52.458984] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:38.046 [2024-07-24 05:06:52.458995] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.046 [2024-07-24 05:06:52.459009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:38.046 [2024-07-24 05:06:52.459020] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:38.046 [2024-07-24 05:06:52.459034] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.046 [2024-07-24 05:06:52.459045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:38.046 [2024-07-24 05:06:52.459058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:38.046 [2024-07-24 05:06:52.459070] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.046 [2024-07-24 05:06:52.459083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:38.046 [2024-07-24 05:06:52.459094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:38.046 [2024-07-24 05:06:52.459108] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.046 [2024-07-24 05:06:52.459119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:38.046 [2024-07-24 05:06:52.459132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:38.046 [2024-07-24 05:06:52.459143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.046 [2024-07-24 05:06:52.459157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:38.046 [2024-07-24 05:06:52.459168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:38.046 [2024-07-24 05:06:52.459184] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:38.046 [2024-07-24 05:06:52.459195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:38.046 [2024-07-24 05:06:52.459208] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:38.046 [2024-07-24 05:06:52.459219] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:17:38.046 [2024-07-24 05:06:52.459246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:38.046 [2024-07-24 05:06:52.459258] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:38.046 [2024-07-24 05:06:52.459272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.046 [2024-07-24 05:06:52.459283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:38.046 [2024-07-24 05:06:52.459296] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:38.046 [2024-07-24 05:06:52.459307] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.046 [2024-07-24 05:06:52.459322] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:38.046 [2024-07-24 05:06:52.459335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:38.046 [2024-07-24 05:06:52.459369] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:38.046 [2024-07-24 05:06:52.459382] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.046 [2024-07-24 05:06:52.459396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:38.046 [2024-07-24 05:06:52.459408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:38.046 [2024-07-24 05:06:52.459423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:38.046 [2024-07-24 05:06:52.459435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:38.046 [2024-07-24 05:06:52.459448] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:38.046 [2024-07-24 05:06:52.459460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:38.046 [2024-07-24 05:06:52.459478] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:38.046 [2024-07-24 05:06:52.459493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:38.046 [2024-07-24 05:06:52.459513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:38.046 [2024-07-24 05:06:52.459526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:38.046 [2024-07-24 05:06:52.459540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:38.046 [2024-07-24 05:06:52.459552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:38.046 [2024-07-24 05:06:52.459568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:38.046 [2024-07-24 05:06:52.459580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:38.046 [2024-07-24 05:06:52.459594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:38.046 [2024-07-24 05:06:52.459606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:38.046 [2024-07-24 
05:06:52.459621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:38.046 [2024-07-24 05:06:52.459633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:38.046 [2024-07-24 05:06:52.459648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:38.046 [2024-07-24 05:06:52.459664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:38.046 [2024-07-24 05:06:52.459678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:38.046 [2024-07-24 05:06:52.459690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:38.046 [2024-07-24 05:06:52.459705] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:38.046 [2024-07-24 05:06:52.459718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:38.046 [2024-07-24 05:06:52.459733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:38.046 [2024-07-24 05:06:52.459746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:38.046 [2024-07-24 05:06:52.459760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:38.046 [2024-07-24 05:06:52.459772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:38.046 [2024-07-24 05:06:52.459791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.046 [2024-07-24 05:06:52.459804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:38.046 [2024-07-24 05:06:52.459819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.134 ms 00:17:38.046 [2024-07-24 05:06:52.459831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.046 [2024-07-24 05:06:52.459932] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
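By this point the entire stack under ftl0 has been assembled through RPCs scattered across the trace above; collected in one place for reference, with sizes, addresses and UUIDs exactly as issued in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base side: 5 GiB QEMU NVMe at 0000:00:11.0, holding a thin-provisioned
    # 103424 MiB lvol that becomes the FTL data device.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 093b7c9d-c161-45c9-9974-7b80fe1156b9
    # Cache side: second controller at 0000:00:10.0, split into one
    # 5171 MiB partition (nvc0n1p0) used as the non-volatile write cache.
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev on top of both, L2P capped at 60 MiB of DRAM; the 240 s RPC
    # timeout covers the NV-cache scrub that startup just announced.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d 1d6f02d2-ae2d-4130-90cb-7f250b32040a -c nvc0n1p0 --l2p_dram_limit 60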
00:17:38.046 [2024-07-24 05:06:52.459951] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:40.579 [2024-07-24 05:06:55.031867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.579 [2024-07-24 05:06:55.031957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:40.579 [2024-07-24 05:06:55.031997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2571.942 ms 00:17:40.579 [2024-07-24 05:06:55.032010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.579 [2024-07-24 05:06:55.060933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.579 [2024-07-24 05:06:55.061005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.579 [2024-07-24 05:06:55.061044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.572 ms 00:17:40.579 [2024-07-24 05:06:55.061057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.579 [2024-07-24 05:06:55.061233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.579 [2024-07-24 05:06:55.061252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:40.579 [2024-07-24 05:06:55.061267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:40.579 [2024-07-24 05:06:55.061281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.579 [2024-07-24 05:06:55.107272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.579 [2024-07-24 05:06:55.107342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.579 [2024-07-24 05:06:55.107370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.867 ms 00:17:40.579 [2024-07-24 05:06:55.107386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.579 [2024-07-24 05:06:55.107469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.579 [2024-07-24 05:06:55.107488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.579 [2024-07-24 05:06:55.107510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:40.579 [2024-07-24 05:06:55.107525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.579 [2024-07-24 05:06:55.108042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.579 [2024-07-24 05:06:55.108080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.579 [2024-07-24 05:06:55.108101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:17:40.579 [2024-07-24 05:06:55.108116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.579 [2024-07-24 05:06:55.108317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.579 [2024-07-24 05:06:55.108348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.579 [2024-07-24 05:06:55.108368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:17:40.579 [2024-07-24 05:06:55.108383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.579 [2024-07-24 05:06:55.126221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.579 [2024-07-24 05:06:55.126282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.579 [2024-07-24 
05:06:55.126318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.769 ms 00:17:40.579 [2024-07-24 05:06:55.126331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.579 [2024-07-24 05:06:55.138403] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:40.579 [2024-07-24 05:06:55.151670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.579 [2024-07-24 05:06:55.151790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:40.579 [2024-07-24 05:06:55.151825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.213 ms 00:17:40.579 [2024-07-24 05:06:55.151839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.579 [2024-07-24 05:06:55.204846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.579 [2024-07-24 05:06:55.204942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:40.579 [2024-07-24 05:06:55.204964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.936 ms 00:17:40.579 [2024-07-24 05:06:55.204978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.579 [2024-07-24 05:06:55.205259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.579 [2024-07-24 05:06:55.205287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:40.579 [2024-07-24 05:06:55.205302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:17:40.579 [2024-07-24 05:06:55.205319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.838 [2024-07-24 05:06:55.233295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.838 [2024-07-24 05:06:55.233367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:40.838 [2024-07-24 05:06:55.233400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.898 ms 00:17:40.838 [2024-07-24 05:06:55.233414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.838 [2024-07-24 05:06:55.260464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.838 [2024-07-24 05:06:55.260522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:40.838 [2024-07-24 05:06:55.260555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.002 ms 00:17:40.838 [2024-07-24 05:06:55.260568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.838 [2024-07-24 05:06:55.261347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.838 [2024-07-24 05:06:55.261427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:40.838 [2024-07-24 05:06:55.261458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 00:17:40.838 [2024-07-24 05:06:55.261471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.838 [2024-07-24 05:06:55.360889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.838 [2024-07-24 05:06:55.360982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:40.838 [2024-07-24 05:06:55.361003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.346 ms 00:17:40.838 [2024-07-24 05:06:55.361021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.838 [2024-07-24 
05:06:55.394377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.838 [2024-07-24 05:06:55.394458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:40.838 [2024-07-24 05:06:55.394478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.315 ms 00:17:40.838 [2024-07-24 05:06:55.394493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.838 [2024-07-24 05:06:55.423406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.838 [2024-07-24 05:06:55.423473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:40.838 [2024-07-24 05:06:55.423491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.870 ms 00:17:40.838 [2024-07-24 05:06:55.423506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.838 [2024-07-24 05:06:55.453308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.838 [2024-07-24 05:06:55.453406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:40.838 [2024-07-24 05:06:55.453428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.760 ms 00:17:40.838 [2024-07-24 05:06:55.453441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.838 [2024-07-24 05:06:55.453492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.838 [2024-07-24 05:06:55.453510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:40.838 [2024-07-24 05:06:55.453524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:40.838 [2024-07-24 05:06:55.453540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.838 [2024-07-24 05:06:55.453678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.838 [2024-07-24 05:06:55.453717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:40.838 [2024-07-24 05:06:55.453731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:40.838 [2024-07-24 05:06:55.453745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.838 [2024-07-24 05:06:55.454948] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3009.933 ms, result 0 00:17:40.838 { 00:17:40.838 "name": "ftl0", 00:17:40.838 "uuid": "79781938-64aa-48ad-b24e-2244f1db1747" 00:17:40.838 } 00:17:41.096 05:06:55 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:41.096 05:06:55 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:17:41.096 05:06:55 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:41.096 05:06:55 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:17:41.096 05:06:55 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:41.096 05:06:55 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:41.097 05:06:55 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:41.355 05:06:55 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:41.355 [ 00:17:41.355 { 00:17:41.355 "name": "ftl0", 00:17:41.355 "aliases": [ 00:17:41.355 "79781938-64aa-48ad-b24e-2244f1db1747" 00:17:41.355 ], 00:17:41.355 "product_name": "FTL 
disk", 00:17:41.355 "block_size": 4096, 00:17:41.355 "num_blocks": 20971520, 00:17:41.355 "uuid": "79781938-64aa-48ad-b24e-2244f1db1747", 00:17:41.355 "assigned_rate_limits": { 00:17:41.355 "rw_ios_per_sec": 0, 00:17:41.355 "rw_mbytes_per_sec": 0, 00:17:41.355 "r_mbytes_per_sec": 0, 00:17:41.355 "w_mbytes_per_sec": 0 00:17:41.355 }, 00:17:41.355 "claimed": false, 00:17:41.355 "zoned": false, 00:17:41.355 "supported_io_types": { 00:17:41.355 "read": true, 00:17:41.355 "write": true, 00:17:41.355 "unmap": true, 00:17:41.355 "flush": true, 00:17:41.355 "reset": false, 00:17:41.355 "nvme_admin": false, 00:17:41.355 "nvme_io": false, 00:17:41.355 "nvme_io_md": false, 00:17:41.355 "write_zeroes": true, 00:17:41.355 "zcopy": false, 00:17:41.355 "get_zone_info": false, 00:17:41.355 "zone_management": false, 00:17:41.355 "zone_append": false, 00:17:41.355 "compare": false, 00:17:41.355 "compare_and_write": false, 00:17:41.355 "abort": false, 00:17:41.355 "seek_hole": false, 00:17:41.355 "seek_data": false, 00:17:41.355 "copy": false, 00:17:41.355 "nvme_iov_md": false 00:17:41.355 }, 00:17:41.355 "driver_specific": { 00:17:41.355 "ftl": { 00:17:41.355 "base_bdev": "1d6f02d2-ae2d-4130-90cb-7f250b32040a", 00:17:41.355 "cache": "nvc0n1p0" 00:17:41.355 } 00:17:41.355 } 00:17:41.355 } 00:17:41.355 ] 00:17:41.355 05:06:55 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:17:41.355 05:06:55 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:41.355 05:06:55 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:41.614 05:06:56 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:41.614 05:06:56 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:41.873 [2024-07-24 05:06:56.395901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.873 [2024-07-24 05:06:56.395969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:41.873 [2024-07-24 05:06:56.396011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:41.873 [2024-07-24 05:06:56.396022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.873 [2024-07-24 05:06:56.396070] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.873 [2024-07-24 05:06:56.399438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.873 [2024-07-24 05:06:56.399490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:41.873 [2024-07-24 05:06:56.399507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.345 ms 00:17:41.873 [2024-07-24 05:06:56.399521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.873 [2024-07-24 05:06:56.400124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.873 [2024-07-24 05:06:56.400162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:41.873 [2024-07-24 05:06:56.400178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:17:41.873 [2024-07-24 05:06:56.400194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.873 [2024-07-24 05:06:56.403473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.873 [2024-07-24 05:06:56.403509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:41.873 
[2024-07-24 05:06:56.403525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.250 ms 00:17:41.873 [2024-07-24 05:06:56.403540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.873 [2024-07-24 05:06:56.409925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.873 [2024-07-24 05:06:56.409974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:41.873 [2024-07-24 05:06:56.409987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.353 ms 00:17:41.873 [2024-07-24 05:06:56.410005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.873 [2024-07-24 05:06:56.439421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.873 [2024-07-24 05:06:56.439499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:41.873 [2024-07-24 05:06:56.439521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.327 ms 00:17:41.873 [2024-07-24 05:06:56.439535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.873 [2024-07-24 05:06:56.457480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.873 [2024-07-24 05:06:56.457550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:41.873 [2024-07-24 05:06:56.457568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.846 ms 00:17:41.873 [2024-07-24 05:06:56.457583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.873 [2024-07-24 05:06:56.457888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.873 [2024-07-24 05:06:56.457930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:41.873 [2024-07-24 05:06:56.457948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:17:41.873 [2024-07-24 05:06:56.457963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.873 [2024-07-24 05:06:56.486032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.873 [2024-07-24 05:06:56.486091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:41.873 [2024-07-24 05:06:56.486107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.024 ms 00:17:41.873 [2024-07-24 05:06:56.486120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.133 [2024-07-24 05:06:56.516893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.133 [2024-07-24 05:06:56.516975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:42.133 [2024-07-24 05:06:56.516994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.722 ms 00:17:42.133 [2024-07-24 05:06:56.517008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.133 [2024-07-24 05:06:56.545864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.133 [2024-07-24 05:06:56.545951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:42.133 [2024-07-24 05:06:56.545971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.804 ms 00:17:42.133 [2024-07-24 05:06:56.545984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.133 [2024-07-24 05:06:56.574272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.133 [2024-07-24 05:06:56.574349] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:42.133 [2024-07-24 05:06:56.574369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.150 ms 00:17:42.133 [2024-07-24 05:06:56.574383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.133 [2024-07-24 05:06:56.574435] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:42.133 [2024-07-24 05:06:56.574462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:42.133 [2024-07-24 05:06:56.574685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 
[2024-07-24 05:06:56.574779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.574992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:42.134 [2024-07-24 05:06:56.575122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:42.134 [2024-07-24 05:06:56.575936] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:42.134 [2024-07-24 05:06:56.575963] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 79781938-64aa-48ad-b24e-2244f1db1747 00:17:42.134 [2024-07-24 05:06:56.575976] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:42.134 [2024-07-24 05:06:56.576005] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:42.134 [2024-07-24 05:06:56.576021] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:42.134 [2024-07-24 05:06:56.576033] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:42.135 [2024-07-24 05:06:56.576046] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:42.135 [2024-07-24 05:06:56.576057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:42.135 [2024-07-24 05:06:56.576070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:42.135 [2024-07-24 05:06:56.576080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:42.135 [2024-07-24 05:06:56.576092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:42.135 [2024-07-24 05:06:56.576104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.135 [2024-07-24 05:06:56.576118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:42.135 [2024-07-24 05:06:56.576130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:17:42.135 [2024-07-24 05:06:56.576143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-07-24 05:06:56.591079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.135 [2024-07-24 05:06:56.591135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:42.135 [2024-07-24 05:06:56.591151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.867 ms 00:17:42.135 [2024-07-24 05:06:56.591165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-07-24 05:06:56.591680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.135 [2024-07-24 05:06:56.591716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:42.135 [2024-07-24 05:06:56.591731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:17:42.135 [2024-07-24 05:06:56.591746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-07-24 05:06:56.643907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.135 [2024-07-24 05:06:56.643981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.135 [2024-07-24 05:06:56.644001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.135 [2024-07-24 05:06:56.644015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:42.135 [2024-07-24 05:06:56.644100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.135 [2024-07-24 05:06:56.644118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.135 [2024-07-24 05:06:56.644129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.135 [2024-07-24 05:06:56.644142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-07-24 05:06:56.644312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.135 [2024-07-24 05:06:56.644338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.135 [2024-07-24 05:06:56.644351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.135 [2024-07-24 05:06:56.644365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-07-24 05:06:56.644396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.135 [2024-07-24 05:06:56.644416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.135 [2024-07-24 05:06:56.644428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.135 [2024-07-24 05:06:56.644441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-07-24 05:06:56.739779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.135 [2024-07-24 05:06:56.739868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.135 [2024-07-24 05:06:56.739888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.135 [2024-07-24 05:06:56.739902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.394 [2024-07-24 05:06:56.811765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.394 [2024-07-24 05:06:56.811858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.394 [2024-07-24 05:06:56.811895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.394 [2024-07-24 05:06:56.811909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.394 [2024-07-24 05:06:56.812039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.394 [2024-07-24 05:06:56.812064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.394 [2024-07-24 05:06:56.812076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.394 [2024-07-24 05:06:56.812089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.394 [2024-07-24 05:06:56.812199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.394 [2024-07-24 05:06:56.812223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.394 [2024-07-24 05:06:56.812236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.394 [2024-07-24 05:06:56.812264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.394 [2024-07-24 05:06:56.812394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.394 [2024-07-24 05:06:56.812419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.394 [2024-07-24 05:06:56.812432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.394 [2024-07-24 
05:06:56.812446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.394 [2024-07-24 05:06:56.812508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.394 [2024-07-24 05:06:56.812530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:42.394 [2024-07-24 05:06:56.812542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.394 [2024-07-24 05:06:56.812554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.394 [2024-07-24 05:06:56.812606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.394 [2024-07-24 05:06:56.812623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.394 [2024-07-24 05:06:56.812637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.394 [2024-07-24 05:06:56.812650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.394 [2024-07-24 05:06:56.812710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.394 [2024-07-24 05:06:56.812732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.394 [2024-07-24 05:06:56.812745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.394 [2024-07-24 05:06:56.812758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.394 [2024-07-24 05:06:56.812948] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 417.046 ms, result 0 00:17:42.394 true 00:17:42.394 05:06:56 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 78737 00:17:42.394 05:06:56 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 78737 ']' 00:17:42.394 05:06:56 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 78737 00:17:42.394 05:06:56 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:17:42.394 05:06:56 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.394 05:06:56 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78737 00:17:42.394 05:06:56 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:42.394 05:06:56 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:42.394 killing process with pid 78737 00:17:42.394 05:06:56 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78737' 00:17:42.394 05:06:56 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 78737 00:17:42.394 05:06:56 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 78737 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local sanitizers 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # shift 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local asan_lib= 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # grep libasan 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # break 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:46.586 05:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:46.586 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:46.586 fio-3.35 00:17:46.586 Starting 1 thread 00:17:53.142 00:17:53.142 test: (groupid=0, jobs=1): err= 0: pid=78940: Wed Jul 24 05:07:06 2024 00:17:53.142 read: IOPS=918, BW=61.0MiB/s (64.0MB/s)(255MiB/4173msec) 00:17:53.142 slat (nsec): min=5178, max=96601, avg=7159.26, stdev=3669.42 00:17:53.142 clat (usec): min=357, max=747, avg=486.66, stdev=50.40 00:17:53.142 lat (usec): min=366, max=760, avg=493.82, stdev=51.29 00:17:53.142 clat percentiles (usec): 00:17:53.142 | 1.00th=[ 388], 5.00th=[ 424], 10.00th=[ 437], 20.00th=[ 449], 00:17:53.142 | 30.00th=[ 457], 40.00th=[ 469], 50.00th=[ 478], 60.00th=[ 486], 00:17:53.142 | 70.00th=[ 498], 80.00th=[ 523], 90.00th=[ 562], 95.00th=[ 586], 00:17:53.142 | 99.00th=[ 644], 99.50th=[ 660], 99.90th=[ 685], 99.95th=[ 734], 00:17:53.142 | 99.99th=[ 750] 00:17:53.142 write: IOPS=925, BW=61.4MiB/s (64.4MB/s)(256MiB/4168msec); 0 zone resets 00:17:53.142 slat (nsec): min=17858, max=98359, avg=23849.91, stdev=6483.60 00:17:53.142 clat (usec): min=382, max=1046, avg=553.32, stdev=65.25 00:17:53.142 lat (usec): min=403, max=1068, avg=577.17, stdev=65.45 00:17:53.142 clat percentiles (usec): 00:17:53.142 | 1.00th=[ 445], 5.00th=[ 469], 10.00th=[ 482], 20.00th=[ 502], 00:17:53.142 | 30.00th=[ 523], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 562], 00:17:53.142 | 70.00th=[ 578], 80.00th=[ 594], 90.00th=[ 627], 95.00th=[ 652], 00:17:53.142 | 99.00th=[ 832], 99.50th=[ 889], 99.90th=[ 947], 99.95th=[ 996], 00:17:53.142 | 99.99th=[ 1045] 00:17:53.142 bw ( KiB/s): min=60792, max=63784, per=99.98%, avg=62900.00, stdev=983.40, samples=8 00:17:53.142 iops : min= 894, max= 938, avg=925.00, stdev=14.46, samples=8 00:17:53.142 lat (usec) : 500=44.34%, 750=54.87%, 1000=0.78% 00:17:53.142 lat 
(msec) : 2=0.01% 00:17:53.142 cpu : usr=99.21%, sys=0.10%, ctx=7, majf=0, minf=1171 00:17:53.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:53.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:53.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:53.142 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:53.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:53.142 00:17:53.142 Run status group 0 (all jobs): 00:17:53.142 READ: bw=61.0MiB/s (64.0MB/s), 61.0MiB/s-61.0MiB/s (64.0MB/s-64.0MB/s), io=255MiB (267MB), run=4173-4173msec 00:17:53.142 WRITE: bw=61.4MiB/s (64.4MB/s), 61.4MiB/s-61.4MiB/s (64.4MB/s-64.4MB/s), io=256MiB (269MB), run=4168-4168msec 00:17:53.400 ----------------------------------------------------- 00:17:53.400 Suppressions used: 00:17:53.400 count bytes template 00:17:53.400 1 5 /usr/src/fio/parse.c 00:17:53.400 1 8 libtcmalloc_minimal.so 00:17:53.400 1 904 libcrypto.so 00:17:53.400 ----------------------------------------------------- 00:17:53.401 00:17:53.401 05:07:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:53.401 05:07:07 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:53.401 05:07:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local sanitizers 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # shift 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local asan_lib= 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # grep libasan 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # break 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:53.659 05:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:53.659 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:53.659 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:53.659 fio-3.35 00:17:53.659 Starting 2 threads 00:18:25.735 00:18:25.735 first_half: (groupid=0, jobs=1): err= 0: pid=79043: Wed Jul 24 05:07:37 2024 00:18:25.735 read: IOPS=2308, BW=9234KiB/s (9455kB/s)(256MiB/28364msec) 00:18:25.735 slat (usec): min=4, max=161, avg= 7.68, stdev= 2.83 00:18:25.735 clat (usec): min=921, max=314490, avg=47158.04, stdev=27270.88 00:18:25.735 lat (usec): min=926, max=314497, avg=47165.72, stdev=27271.11 00:18:25.735 clat percentiles (msec): 00:18:25.735 | 1.00th=[ 12], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:18:25.735 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 42], 00:18:25.735 | 70.00th=[ 43], 80.00th=[ 46], 90.00th=[ 50], 95.00th=[ 88], 00:18:25.735 | 99.00th=[ 190], 99.50th=[ 203], 99.90th=[ 232], 99.95th=[ 275], 00:18:25.735 | 99.99th=[ 305] 00:18:25.735 write: IOPS=2314, BW=9258KiB/s (9480kB/s)(256MiB/28316msec); 0 zone resets 00:18:25.735 slat (usec): min=5, max=107, avg= 8.81, stdev= 5.23 00:18:25.735 clat (usec): min=474, max=53383, avg=8247.33, stdev=8132.70 00:18:25.735 lat (usec): min=486, max=53405, avg=8256.14, stdev=8132.87 00:18:25.735 clat percentiles (usec): 00:18:25.735 | 1.00th=[ 1074], 5.00th=[ 1532], 10.00th=[ 1909], 20.00th=[ 3458], 00:18:25.735 | 30.00th=[ 4424], 40.00th=[ 5538], 50.00th=[ 6259], 60.00th=[ 7177], 00:18:25.735 | 70.00th=[ 7898], 80.00th=[ 9503], 90.00th=[15926], 95.00th=[24249], 00:18:25.735 | 99.00th=[43254], 99.50th=[44827], 99.90th=[51119], 99.95th=[51643], 00:18:25.735 | 99.99th=[52691] 00:18:25.735 bw ( KiB/s): min= 1619, max=47832, per=100.00%, avg=20823.64, stdev=14763.00, samples=25 00:18:25.735 iops : min= 404, max=11958, avg=5205.88, stdev=3690.79, samples=25 00:18:25.735 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.28% 00:18:25.735 lat (msec) : 2=5.14%, 4=7.23%, 10=28.07%, 20=8.01%, 50=46.56% 00:18:25.735 lat (msec) : 100=2.40%, 250=2.22%, 500=0.04% 00:18:25.735 cpu : usr=98.97%, sys=0.36%, ctx=42, majf=0, minf=5532 00:18:25.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:25.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.735 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:25.735 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:25.735 second_half: (groupid=0, jobs=1): err= 0: pid=79044: Wed Jul 24 05:07:37 2024 00:18:25.735 read: IOPS=2329, BW=9318KiB/s (9541kB/s)(256MiB/28114msec) 00:18:25.735 slat (usec): min=4, max=144, avg= 7.69, stdev= 2.92 00:18:25.735 clat (msec): min=12, max=226, avg=47.67, stdev=24.71 00:18:25.735 lat (msec): min=12, max=226, avg=47.67, stdev=24.71 00:18:25.735 clat percentiles (msec): 00:18:25.735 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 41], 00:18:25.735 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 42], 00:18:25.735 | 70.00th=[ 43], 80.00th=[ 47], 90.00th=[ 50], 95.00th=[ 85], 00:18:25.735 | 99.00th=[ 184], 99.50th=[ 201], 
99.90th=[ 218], 99.95th=[ 222], 00:18:25.735 | 99.99th=[ 226] 00:18:25.735 write: IOPS=2344, BW=9376KiB/s (9601kB/s)(256MiB/27959msec); 0 zone resets 00:18:25.735 slat (usec): min=5, max=2034, avg= 8.72, stdev= 9.99 00:18:25.735 clat (usec): min=504, max=43048, avg=7251.69, stdev=4630.78 00:18:25.735 lat (usec): min=517, max=43056, avg=7260.41, stdev=4631.11 00:18:25.735 clat percentiles (usec): 00:18:25.735 | 1.00th=[ 1237], 5.00th=[ 2089], 10.00th=[ 2933], 20.00th=[ 3884], 00:18:25.735 | 30.00th=[ 4817], 40.00th=[ 5604], 50.00th=[ 6194], 60.00th=[ 7046], 00:18:25.735 | 70.00th=[ 7504], 80.00th=[ 9110], 90.00th=[14091], 95.00th=[16057], 00:18:25.735 | 99.00th=[23987], 99.50th=[31851], 99.90th=[38536], 99.95th=[41157], 00:18:25.735 | 99.99th=[42206] 00:18:25.735 bw ( KiB/s): min= 600, max=47584, per=100.00%, avg=21846.00, stdev=14319.12, samples=24 00:18:25.735 iops : min= 150, max=11896, avg=5461.50, stdev=3579.78, samples=24 00:18:25.735 lat (usec) : 750=0.05%, 1000=0.15% 00:18:25.735 lat (msec) : 2=2.06%, 4=8.57%, 10=30.17%, 20=8.40%, 50=45.67% 00:18:25.735 lat (msec) : 100=2.83%, 250=2.10% 00:18:25.735 cpu : usr=98.78%, sys=0.46%, ctx=74, majf=0, minf=5587 00:18:25.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:25.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.735 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:25.735 issued rwts: total=65490,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:25.735 00:18:25.735 Run status group 0 (all jobs): 00:18:25.735 READ: bw=18.0MiB/s (18.9MB/s), 9234KiB/s-9318KiB/s (9455kB/s-9541kB/s), io=512MiB (536MB), run=28114-28364msec 00:18:25.735 WRITE: bw=18.1MiB/s (19.0MB/s), 9258KiB/s-9376KiB/s (9480kB/s-9601kB/s), io=512MiB (537MB), run=27959-28316msec 00:18:25.735 ----------------------------------------------------- 00:18:25.735 Suppressions used: 00:18:25.735 count bytes template 00:18:25.735 2 10 /usr/src/fio/parse.c 00:18:25.735 3 288 /usr/src/fio/iolog.c 00:18:25.735 1 8 libtcmalloc_minimal.so 00:18:25.735 1 904 libcrypto.so 00:18:25.735 ----------------------------------------------------- 00:18:25.735 00:18:25.735 05:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:25.735 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:25.735 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:25.735 05:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:25.735 05:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:25.735 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:25.735 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:25.735 05:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:25.735 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:25.735 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:18:25.735 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:25.735 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local 
sanitizers 00:18:25.735 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.736 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # shift 00:18:25.736 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local asan_lib= 00:18:25.736 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.736 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.736 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # grep libasan 00:18:25.736 05:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:18:25.736 05:07:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:25.736 05:07:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:25.736 05:07:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # break 00:18:25.736 05:07:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:25.736 05:07:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:25.736 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:25.736 fio-3.35 00:18:25.736 Starting 1 thread 00:18:43.830 00:18:43.830 test: (groupid=0, jobs=1): err= 0: pid=79395: Wed Jul 24 05:07:57 2024 00:18:43.830 read: IOPS=6114, BW=23.9MiB/s (25.0MB/s)(255MiB/10663msec) 00:18:43.830 slat (nsec): min=4351, max=47637, avg=6832.42, stdev=2729.28 00:18:43.830 clat (usec): min=912, max=40483, avg=20920.75, stdev=1368.69 00:18:43.830 lat (usec): min=917, max=40502, avg=20927.58, stdev=1368.77 00:18:43.830 clat percentiles (usec): 00:18:43.830 | 1.00th=[19268], 5.00th=[19530], 10.00th=[19792], 20.00th=[20055], 00:18:43.830 | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[21103], 00:18:43.830 | 70.00th=[21103], 80.00th=[21365], 90.00th=[21890], 95.00th=[22152], 00:18:43.830 | 99.00th=[26346], 99.50th=[28181], 99.90th=[30278], 99.95th=[35390], 00:18:43.830 | 99.99th=[40109] 00:18:43.830 write: IOPS=11.2k, BW=43.9MiB/s (46.0MB/s)(256MiB/5831msec); 0 zone resets 00:18:43.830 slat (usec): min=5, max=214, avg= 9.73, stdev= 5.67 00:18:43.830 clat (usec): min=648, max=71806, avg=11330.37, stdev=14547.09 00:18:43.830 lat (usec): min=656, max=71813, avg=11340.10, stdev=14547.16 00:18:43.830 clat percentiles (usec): 00:18:43.830 | 1.00th=[ 988], 5.00th=[ 1205], 10.00th=[ 1336], 20.00th=[ 1532], 00:18:43.830 | 30.00th=[ 1762], 40.00th=[ 2278], 50.00th=[ 7111], 60.00th=[ 8029], 00:18:43.830 | 70.00th=[ 9372], 80.00th=[11994], 90.00th=[41157], 95.00th=[45876], 00:18:43.830 | 99.00th=[50594], 99.50th=[53740], 99.90th=[58983], 99.95th=[60556], 00:18:43.830 | 99.99th=[69731] 00:18:43.830 bw ( KiB/s): min=23360, max=65488, per=97.18%, avg=43690.67, stdev=12087.05, samples=12 00:18:43.830 iops : min= 5840, max=16372, avg=10922.67, stdev=3021.83, samples=12 00:18:43.830 lat (usec) : 750=0.01%, 1000=0.57% 00:18:43.830 lat (msec) : 2=17.86%, 4=2.51%, 10=16.00%, 20=12.91%, 50=49.52% 00:18:43.830 lat (msec) : 100=0.62% 00:18:43.830 cpu : usr=98.32%, sys=0.93%, ctx=22, majf=0, minf=5567 00:18:43.830 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.1%, >=64=99.8% 00:18:43.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.830 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:43.830 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:43.830 00:18:43.830 Run status group 0 (all jobs): 00:18:43.830 READ: bw=23.9MiB/s (25.0MB/s), 23.9MiB/s-23.9MiB/s (25.0MB/s-25.0MB/s), io=255MiB (267MB), run=10663-10663msec 00:18:43.830 WRITE: bw=43.9MiB/s (46.0MB/s), 43.9MiB/s-43.9MiB/s (46.0MB/s-46.0MB/s), io=256MiB (268MB), run=5831-5831msec 00:18:45.234 ----------------------------------------------------- 00:18:45.234 Suppressions used: 00:18:45.234 count bytes template 00:18:45.234 1 5 /usr/src/fio/parse.c 00:18:45.234 2 192 /usr/src/fio/iolog.c 00:18:45.234 1 8 libtcmalloc_minimal.so 00:18:45.234 1 904 libcrypto.so 00:18:45.234 ----------------------------------------------------- 00:18:45.234 00:18:45.234 05:07:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:45.234 05:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:45.234 05:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:45.234 05:07:59 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:45.234 05:07:59 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:45.234 Remove shared memory files 00:18:45.234 05:07:59 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:45.234 05:07:59 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:45.234 05:07:59 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:45.234 05:07:59 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62027 /dev/shm/spdk_tgt_trace.pid77683 00:18:45.234 05:07:59 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:45.234 05:07:59 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:45.234 00:18:45.234 real 1m11.731s 00:18:45.234 user 2m38.589s 00:18:45.234 sys 0m3.741s 00:18:45.234 05:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:45.234 05:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:45.234 ************************************ 00:18:45.234 END TEST ftl_fio_basic 00:18:45.234 ************************************ 00:18:45.234 05:07:59 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:45.234 05:07:59 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:45.234 05:07:59 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:45.234 05:07:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:45.234 ************************************ 00:18:45.234 START TEST ftl_bdevperf 00:18:45.234 ************************************ 00:18:45.234 05:07:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:45.234 * Looking for test storage... 
00:18:45.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:45.493 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:45.494 05:07:59 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=79651 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 79651 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 79651 ']' 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.494 05:07:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:45.494 [2024-07-24 05:07:59.992496] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:18:45.494 [2024-07-24 05:07:59.992712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79651 ] 00:18:45.752 [2024-07-24 05:08:00.164463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.752 [2024-07-24 05:08:00.321156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.319 05:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.319 05:08:00 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:18:46.319 05:08:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:46.319 05:08:00 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:46.319 05:08:00 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:46.319 05:08:00 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:46.319 05:08:00 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:46.319 05:08:00 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:46.887 05:08:01 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:46.887 05:08:01 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:46.887 05:08:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:46.887 05:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bdev_name=nvme0n1 00:18:46.887 05:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local bdev_info 00:18:46.887 05:08:01 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bs 00:18:46.887 05:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local nb 00:18:46.887 05:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:47.145 05:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:18:47.145 { 00:18:47.145 "name": "nvme0n1", 00:18:47.145 "aliases": [ 00:18:47.145 "d5033d52-1699-4831-9c08-330d5302ed5e" 00:18:47.145 ], 00:18:47.145 "product_name": "NVMe disk", 00:18:47.145 "block_size": 4096, 00:18:47.145 "num_blocks": 1310720, 00:18:47.145 "uuid": "d5033d52-1699-4831-9c08-330d5302ed5e", 00:18:47.145 "assigned_rate_limits": { 00:18:47.145 "rw_ios_per_sec": 0, 00:18:47.145 "rw_mbytes_per_sec": 0, 00:18:47.145 "r_mbytes_per_sec": 0, 00:18:47.145 "w_mbytes_per_sec": 0 00:18:47.145 }, 00:18:47.145 "claimed": true, 00:18:47.145 "claim_type": "read_many_write_one", 00:18:47.145 "zoned": false, 00:18:47.145 "supported_io_types": { 00:18:47.145 "read": true, 00:18:47.145 "write": true, 00:18:47.145 "unmap": true, 00:18:47.145 "flush": true, 00:18:47.145 "reset": true, 00:18:47.145 "nvme_admin": true, 00:18:47.145 "nvme_io": true, 00:18:47.145 "nvme_io_md": false, 00:18:47.145 "write_zeroes": true, 00:18:47.145 "zcopy": false, 00:18:47.145 "get_zone_info": false, 00:18:47.145 "zone_management": false, 00:18:47.145 "zone_append": false, 00:18:47.145 "compare": true, 00:18:47.145 "compare_and_write": false, 00:18:47.145 "abort": true, 00:18:47.145 "seek_hole": false, 00:18:47.145 "seek_data": false, 00:18:47.145 "copy": true, 00:18:47.145 "nvme_iov_md": false 00:18:47.145 }, 00:18:47.145 "driver_specific": { 00:18:47.145 "nvme": [ 00:18:47.145 { 00:18:47.145 "pci_address": "0000:00:11.0", 00:18:47.145 "trid": { 00:18:47.145 "trtype": "PCIe", 00:18:47.145 "traddr": "0000:00:11.0" 00:18:47.145 }, 00:18:47.145 "ctrlr_data": { 00:18:47.145 "cntlid": 0, 00:18:47.145 "vendor_id": "0x1b36", 00:18:47.145 "model_number": "QEMU NVMe Ctrl", 00:18:47.145 "serial_number": "12341", 00:18:47.145 "firmware_revision": "8.0.0", 00:18:47.145 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:47.145 "oacs": { 00:18:47.145 "security": 0, 00:18:47.145 "format": 1, 00:18:47.145 "firmware": 0, 00:18:47.145 "ns_manage": 1 00:18:47.145 }, 00:18:47.145 "multi_ctrlr": false, 00:18:47.145 "ana_reporting": false 00:18:47.145 }, 00:18:47.145 "vs": { 00:18:47.145 "nvme_version": "1.4" 00:18:47.145 }, 00:18:47.145 "ns_data": { 00:18:47.145 "id": 1, 00:18:47.145 "can_share": false 00:18:47.145 } 00:18:47.145 } 00:18:47.145 ], 00:18:47.145 "mp_policy": "active_passive" 00:18:47.145 } 00:18:47.145 } 00:18:47.145 ]' 00:18:47.145 05:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:18:47.145 05:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # bs=4096 00:18:47.145 05:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:18:47.145 05:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # nb=1310720 00:18:47.145 05:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bdev_size=5120 00:18:47.145 05:08:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # echo 5120 00:18:47.145 05:08:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:47.145 05:08:01 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:47.145 05:08:01 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:47.145 05:08:01 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:47.145 05:08:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:47.403 05:08:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=093b7c9d-c161-45c9-9974-7b80fe1156b9 00:18:47.403 05:08:01 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:47.403 05:08:01 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 093b7c9d-c161-45c9-9974-7b80fe1156b9 00:18:47.661 05:08:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:47.919 05:08:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=819eefc8-cac8-4056-85d7-14956f4028a8 00:18:47.919 05:08:02 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 819eefc8-cac8-4056-85d7-14956f4028a8 00:18:48.178 05:08:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=80ed5491-8ed5-48cb-9b8b-5011e95e2482 00:18:48.178 05:08:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 80ed5491-8ed5-48cb-9b8b-5011e95e2482 00:18:48.178 05:08:02 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:48.178 05:08:02 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:48.178 05:08:02 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=80ed5491-8ed5-48cb-9b8b-5011e95e2482 00:18:48.178 05:08:02 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:48.178 05:08:02 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 80ed5491-8ed5-48cb-9b8b-5011e95e2482 00:18:48.178 05:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bdev_name=80ed5491-8ed5-48cb-9b8b-5011e95e2482 00:18:48.178 05:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local bdev_info 00:18:48.178 05:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bs 00:18:48.178 05:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local nb 00:18:48.178 05:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 80ed5491-8ed5-48cb-9b8b-5011e95e2482 00:18:48.437 05:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:18:48.437 { 00:18:48.437 "name": "80ed5491-8ed5-48cb-9b8b-5011e95e2482", 00:18:48.437 "aliases": [ 00:18:48.437 "lvs/nvme0n1p0" 00:18:48.437 ], 00:18:48.437 "product_name": "Logical Volume", 00:18:48.437 "block_size": 4096, 00:18:48.437 "num_blocks": 26476544, 00:18:48.437 "uuid": "80ed5491-8ed5-48cb-9b8b-5011e95e2482", 00:18:48.437 "assigned_rate_limits": { 00:18:48.437 "rw_ios_per_sec": 0, 00:18:48.437 "rw_mbytes_per_sec": 0, 00:18:48.437 "r_mbytes_per_sec": 0, 00:18:48.437 "w_mbytes_per_sec": 0 00:18:48.437 }, 00:18:48.437 "claimed": false, 00:18:48.437 "zoned": false, 00:18:48.437 "supported_io_types": { 00:18:48.437 "read": true, 00:18:48.437 "write": true, 00:18:48.437 "unmap": true, 00:18:48.437 "flush": false, 00:18:48.437 "reset": true, 00:18:48.437 "nvme_admin": false, 00:18:48.437 "nvme_io": false, 00:18:48.437 "nvme_io_md": false, 00:18:48.437 "write_zeroes": true, 00:18:48.437 "zcopy": false, 00:18:48.437 "get_zone_info": false, 00:18:48.437 "zone_management": false, 00:18:48.437 "zone_append": false, 00:18:48.437 "compare": false, 00:18:48.437 "compare_and_write": false, 00:18:48.437 "abort": false, 00:18:48.437 "seek_hole": true, 
00:18:48.437 "seek_data": true, 00:18:48.437 "copy": false, 00:18:48.437 "nvme_iov_md": false 00:18:48.437 }, 00:18:48.437 "driver_specific": { 00:18:48.437 "lvol": { 00:18:48.437 "lvol_store_uuid": "819eefc8-cac8-4056-85d7-14956f4028a8", 00:18:48.437 "base_bdev": "nvme0n1", 00:18:48.437 "thin_provision": true, 00:18:48.437 "num_allocated_clusters": 0, 00:18:48.437 "snapshot": false, 00:18:48.437 "clone": false, 00:18:48.437 "esnap_clone": false 00:18:48.437 } 00:18:48.437 } 00:18:48.437 } 00:18:48.437 ]' 00:18:48.437 05:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:18:48.437 05:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # bs=4096 00:18:48.437 05:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:18:48.437 05:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # nb=26476544 00:18:48.437 05:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:18:48.437 05:08:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # echo 103424 00:18:48.437 05:08:02 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:48.437 05:08:02 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:48.437 05:08:02 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:48.696 05:08:03 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:48.696 05:08:03 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:48.696 05:08:03 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 80ed5491-8ed5-48cb-9b8b-5011e95e2482 00:18:48.696 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bdev_name=80ed5491-8ed5-48cb-9b8b-5011e95e2482 00:18:48.696 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local bdev_info 00:18:48.696 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bs 00:18:48.696 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local nb 00:18:48.696 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 80ed5491-8ed5-48cb-9b8b-5011e95e2482 00:18:48.953 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:18:48.953 { 00:18:48.953 "name": "80ed5491-8ed5-48cb-9b8b-5011e95e2482", 00:18:48.953 "aliases": [ 00:18:48.953 "lvs/nvme0n1p0" 00:18:48.953 ], 00:18:48.953 "product_name": "Logical Volume", 00:18:48.953 "block_size": 4096, 00:18:48.953 "num_blocks": 26476544, 00:18:48.953 "uuid": "80ed5491-8ed5-48cb-9b8b-5011e95e2482", 00:18:48.953 "assigned_rate_limits": { 00:18:48.953 "rw_ios_per_sec": 0, 00:18:48.953 "rw_mbytes_per_sec": 0, 00:18:48.953 "r_mbytes_per_sec": 0, 00:18:48.953 "w_mbytes_per_sec": 0 00:18:48.953 }, 00:18:48.953 "claimed": false, 00:18:48.953 "zoned": false, 00:18:48.953 "supported_io_types": { 00:18:48.953 "read": true, 00:18:48.953 "write": true, 00:18:48.953 "unmap": true, 00:18:48.953 "flush": false, 00:18:48.953 "reset": true, 00:18:48.953 "nvme_admin": false, 00:18:48.953 "nvme_io": false, 00:18:48.953 "nvme_io_md": false, 00:18:48.953 "write_zeroes": true, 00:18:48.953 "zcopy": false, 00:18:48.953 "get_zone_info": false, 00:18:48.953 "zone_management": false, 00:18:48.953 "zone_append": false, 00:18:48.953 "compare": false, 00:18:48.953 "compare_and_write": false, 00:18:48.953 "abort": false, 00:18:48.953 "seek_hole": true, 00:18:48.953 "seek_data": true, 00:18:48.953 
"copy": false, 00:18:48.953 "nvme_iov_md": false 00:18:48.953 }, 00:18:48.953 "driver_specific": { 00:18:48.953 "lvol": { 00:18:48.953 "lvol_store_uuid": "819eefc8-cac8-4056-85d7-14956f4028a8", 00:18:48.953 "base_bdev": "nvme0n1", 00:18:48.953 "thin_provision": true, 00:18:48.953 "num_allocated_clusters": 0, 00:18:48.953 "snapshot": false, 00:18:48.953 "clone": false, 00:18:48.953 "esnap_clone": false 00:18:48.953 } 00:18:48.953 } 00:18:48.953 } 00:18:48.953 ]' 00:18:48.953 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:18:48.953 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # bs=4096 00:18:48.953 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:18:48.953 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # nb=26476544 00:18:48.953 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:18:48.953 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # echo 103424 00:18:48.953 05:08:03 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:48.953 05:08:03 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:49.211 05:08:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:18:49.211 05:08:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 80ed5491-8ed5-48cb-9b8b-5011e95e2482 00:18:49.211 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bdev_name=80ed5491-8ed5-48cb-9b8b-5011e95e2482 00:18:49.211 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local bdev_info 00:18:49.211 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bs 00:18:49.211 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local nb 00:18:49.211 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 80ed5491-8ed5-48cb-9b8b-5011e95e2482 00:18:49.469 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:18:49.469 { 00:18:49.469 "name": "80ed5491-8ed5-48cb-9b8b-5011e95e2482", 00:18:49.469 "aliases": [ 00:18:49.469 "lvs/nvme0n1p0" 00:18:49.469 ], 00:18:49.469 "product_name": "Logical Volume", 00:18:49.469 "block_size": 4096, 00:18:49.469 "num_blocks": 26476544, 00:18:49.469 "uuid": "80ed5491-8ed5-48cb-9b8b-5011e95e2482", 00:18:49.469 "assigned_rate_limits": { 00:18:49.469 "rw_ios_per_sec": 0, 00:18:49.469 "rw_mbytes_per_sec": 0, 00:18:49.469 "r_mbytes_per_sec": 0, 00:18:49.469 "w_mbytes_per_sec": 0 00:18:49.469 }, 00:18:49.469 "claimed": false, 00:18:49.469 "zoned": false, 00:18:49.469 "supported_io_types": { 00:18:49.469 "read": true, 00:18:49.469 "write": true, 00:18:49.469 "unmap": true, 00:18:49.469 "flush": false, 00:18:49.469 "reset": true, 00:18:49.469 "nvme_admin": false, 00:18:49.469 "nvme_io": false, 00:18:49.469 "nvme_io_md": false, 00:18:49.469 "write_zeroes": true, 00:18:49.469 "zcopy": false, 00:18:49.469 "get_zone_info": false, 00:18:49.469 "zone_management": false, 00:18:49.469 "zone_append": false, 00:18:49.469 "compare": false, 00:18:49.469 "compare_and_write": false, 00:18:49.469 "abort": false, 00:18:49.469 "seek_hole": true, 00:18:49.469 "seek_data": true, 00:18:49.469 "copy": false, 00:18:49.469 "nvme_iov_md": false 00:18:49.469 }, 00:18:49.469 "driver_specific": { 00:18:49.469 "lvol": { 00:18:49.469 "lvol_store_uuid": "819eefc8-cac8-4056-85d7-14956f4028a8", 00:18:49.469 "base_bdev": 
"nvme0n1", 00:18:49.469 "thin_provision": true, 00:18:49.469 "num_allocated_clusters": 0, 00:18:49.469 "snapshot": false, 00:18:49.469 "clone": false, 00:18:49.469 "esnap_clone": false 00:18:49.469 } 00:18:49.469 } 00:18:49.469 } 00:18:49.469 ]' 00:18:49.469 05:08:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:18:49.469 05:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # bs=4096 00:18:49.469 05:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:18:49.469 05:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # nb=26476544 00:18:49.469 05:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:18:49.469 05:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # echo 103424 00:18:49.469 05:08:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:18:49.469 05:08:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 80ed5491-8ed5-48cb-9b8b-5011e95e2482 -c nvc0n1p0 --l2p_dram_limit 20 00:18:49.728 [2024-07-24 05:08:04.312502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.728 [2024-07-24 05:08:04.312563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:49.728 [2024-07-24 05:08:04.312603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:49.728 [2024-07-24 05:08:04.312615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.728 [2024-07-24 05:08:04.312689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.728 [2024-07-24 05:08:04.312708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:49.728 [2024-07-24 05:08:04.312725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:49.728 [2024-07-24 05:08:04.312736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.728 [2024-07-24 05:08:04.312763] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:49.728 [2024-07-24 05:08:04.313987] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:49.728 [2024-07-24 05:08:04.314037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.728 [2024-07-24 05:08:04.314052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:49.728 [2024-07-24 05:08:04.314085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.277 ms 00:18:49.728 [2024-07-24 05:08:04.314096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.728 [2024-07-24 05:08:04.314248] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7092a4ce-ef22-4ebc-839b-031f075d63e1 00:18:49.728 [2024-07-24 05:08:04.315423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.728 [2024-07-24 05:08:04.315469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:49.728 [2024-07-24 05:08:04.315490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:49.728 [2024-07-24 05:08:04.315504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.728 [2024-07-24 05:08:04.320591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.728 [2024-07-24 05:08:04.320698] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:49.728 [2024-07-24 05:08:04.320719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.027 ms 00:18:49.728 [2024-07-24 05:08:04.320735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.728 [2024-07-24 05:08:04.320879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.728 [2024-07-24 05:08:04.320907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:49.728 [2024-07-24 05:08:04.320922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:18:49.728 [2024-07-24 05:08:04.320940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.728 [2024-07-24 05:08:04.321029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.728 [2024-07-24 05:08:04.321051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:49.728 [2024-07-24 05:08:04.321065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:49.728 [2024-07-24 05:08:04.321080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.728 [2024-07-24 05:08:04.321127] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:49.728 [2024-07-24 05:08:04.325734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.728 [2024-07-24 05:08:04.325780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:49.728 [2024-07-24 05:08:04.325816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.611 ms 00:18:49.728 [2024-07-24 05:08:04.325829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.728 [2024-07-24 05:08:04.325926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.728 [2024-07-24 05:08:04.325946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:49.728 [2024-07-24 05:08:04.325962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:49.728 [2024-07-24 05:08:04.325974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.728 [2024-07-24 05:08:04.326095] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:49.728 [2024-07-24 05:08:04.326306] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:49.728 [2024-07-24 05:08:04.326337] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:49.728 [2024-07-24 05:08:04.326355] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:49.728 [2024-07-24 05:08:04.326372] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:49.728 [2024-07-24 05:08:04.326387] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:49.728 [2024-07-24 05:08:04.326404] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:49.728 [2024-07-24 05:08:04.326422] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:49.728 [2024-07-24 05:08:04.326436] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:49.728 [2024-07-24 05:08:04.326447] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:18:49.728 [2024-07-24 05:08:04.326461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.728 [2024-07-24 05:08:04.326474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:49.728 [2024-07-24 05:08:04.326492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:18:49.728 [2024-07-24 05:08:04.326504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.728 [2024-07-24 05:08:04.326610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.729 [2024-07-24 05:08:04.326640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:49.729 [2024-07-24 05:08:04.326655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:49.729 [2024-07-24 05:08:04.326666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.729 [2024-07-24 05:08:04.326797] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:49.729 [2024-07-24 05:08:04.326813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:49.729 [2024-07-24 05:08:04.326828] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:49.729 [2024-07-24 05:08:04.326843] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.729 [2024-07-24 05:08:04.326857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:49.729 [2024-07-24 05:08:04.326868] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:49.729 [2024-07-24 05:08:04.326881] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:49.729 [2024-07-24 05:08:04.326892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:49.729 [2024-07-24 05:08:04.326906] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:49.729 [2024-07-24 05:08:04.326917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:49.729 [2024-07-24 05:08:04.326930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:49.729 [2024-07-24 05:08:04.326941] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:49.729 [2024-07-24 05:08:04.326955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:49.729 [2024-07-24 05:08:04.326967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:49.729 [2024-07-24 05:08:04.326980] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:49.729 [2024-07-24 05:08:04.327034] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.729 [2024-07-24 05:08:04.327051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:49.729 [2024-07-24 05:08:04.327062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:49.729 [2024-07-24 05:08:04.327088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.729 [2024-07-24 05:08:04.327101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:49.729 [2024-07-24 05:08:04.327114] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:49.729 [2024-07-24 05:08:04.327124] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:49.729 [2024-07-24 05:08:04.327137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:49.729 [2024-07-24 05:08:04.327148] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:49.729 [2024-07-24 05:08:04.327161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:49.729 [2024-07-24 05:08:04.327171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:49.729 [2024-07-24 05:08:04.327183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:49.729 [2024-07-24 05:08:04.327194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:49.729 [2024-07-24 05:08:04.327206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:49.729 [2024-07-24 05:08:04.327217] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:49.729 [2024-07-24 05:08:04.327228] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:49.729 [2024-07-24 05:08:04.327265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:49.729 [2024-07-24 05:08:04.327316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:49.729 [2024-07-24 05:08:04.327327] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:49.729 [2024-07-24 05:08:04.327340] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:49.729 [2024-07-24 05:08:04.327351] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:49.729 [2024-07-24 05:08:04.327367] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:49.729 [2024-07-24 05:08:04.327378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:49.729 [2024-07-24 05:08:04.327391] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:49.729 [2024-07-24 05:08:04.327403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.729 [2024-07-24 05:08:04.327415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:49.729 [2024-07-24 05:08:04.327427] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:49.729 [2024-07-24 05:08:04.327439] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.729 [2024-07-24 05:08:04.327450] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:49.729 [2024-07-24 05:08:04.327465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:49.729 [2024-07-24 05:08:04.327476] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:49.729 [2024-07-24 05:08:04.327490] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.729 [2024-07-24 05:08:04.327502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:49.729 [2024-07-24 05:08:04.327533] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:49.729 [2024-07-24 05:08:04.327544] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:49.729 [2024-07-24 05:08:04.327557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:49.729 [2024-07-24 05:08:04.327567] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:49.729 [2024-07-24 05:08:04.327592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:49.729 [2024-07-24 05:08:04.327622] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:49.729 [2024-07-24 05:08:04.327652] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:49.729 [2024-07-24 05:08:04.327681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:49.729 [2024-07-24 05:08:04.327694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:49.729 [2024-07-24 05:08:04.327705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:49.729 [2024-07-24 05:08:04.327718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:49.729 [2024-07-24 05:08:04.327729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:49.729 [2024-07-24 05:08:04.327742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:49.729 [2024-07-24 05:08:04.327752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:49.729 [2024-07-24 05:08:04.327767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:49.729 [2024-07-24 05:08:04.327778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:49.729 [2024-07-24 05:08:04.327792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:49.729 [2024-07-24 05:08:04.327803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:49.729 [2024-07-24 05:08:04.327816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:49.729 [2024-07-24 05:08:04.327827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:49.729 [2024-07-24 05:08:04.327857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:49.729 [2024-07-24 05:08:04.327868] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:49.729 [2024-07-24 05:08:04.327883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:49.729 [2024-07-24 05:08:04.327896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:49.729 [2024-07-24 05:08:04.327910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:49.729 [2024-07-24 05:08:04.327937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:49.729 [2024-07-24 05:08:04.327951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:49.729 [2024-07-24 05:08:04.327964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.729 [2024-07-24 05:08:04.328008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:49.729 [2024-07-24 05:08:04.328022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:18:49.729 [2024-07-24 05:08:04.328036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.729 [2024-07-24 05:08:04.328082] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:49.729 [2024-07-24 05:08:04.328102] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:52.281 [2024-07-24 05:08:06.379429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.281 [2024-07-24 05:08:06.379529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:52.281 [2024-07-24 05:08:06.379583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2051.360 ms 00:18:52.281 [2024-07-24 05:08:06.379611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.281 [2024-07-24 05:08:06.424289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.281 [2024-07-24 05:08:06.424366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:52.281 [2024-07-24 05:08:06.424387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.459 ms 00:18:52.281 [2024-07-24 05:08:06.424400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.281 [2024-07-24 05:08:06.424569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.281 [2024-07-24 05:08:06.424593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:52.281 [2024-07-24 05:08:06.424605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:52.281 [2024-07-24 05:08:06.424619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.281 [2024-07-24 05:08:06.458118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.281 [2024-07-24 05:08:06.458447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:52.281 [2024-07-24 05:08:06.458584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.432 ms 00:18:52.281 [2024-07-24 05:08:06.458612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.281 [2024-07-24 05:08:06.458667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.281 [2024-07-24 05:08:06.458685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:52.281 [2024-07-24 05:08:06.458699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:52.281 [2024-07-24 05:08:06.458712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.281 [2024-07-24 05:08:06.459195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.281 [2024-07-24 05:08:06.459233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:52.281 [2024-07-24 05:08:06.459274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:18:52.281 [2024-07-24 05:08:06.459288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.281 [2024-07-24 05:08:06.459426] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.281 [2024-07-24 05:08:06.459446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:52.281 [2024-07-24 05:08:06.459461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:52.281 [2024-07-24 05:08:06.459476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.281 [2024-07-24 05:08:06.473702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.281 [2024-07-24 05:08:06.473760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:52.281 [2024-07-24 05:08:06.473776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.205 ms 00:18:52.281 [2024-07-24 05:08:06.473789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.281 [2024-07-24 05:08:06.485648] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:52.281 [2024-07-24 05:08:06.490645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.281 [2024-07-24 05:08:06.490831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:52.281 [2024-07-24 05:08:06.490894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.725 ms 00:18:52.281 [2024-07-24 05:08:06.490908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.281 [2024-07-24 05:08:06.548164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.281 [2024-07-24 05:08:06.548448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:52.281 [2024-07-24 05:08:06.548583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.214 ms 00:18:52.281 [2024-07-24 05:08:06.548740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.281 [2024-07-24 05:08:06.549052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.281 [2024-07-24 05:08:06.549223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:52.281 [2024-07-24 05:08:06.549357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:18:52.281 [2024-07-24 05:08:06.549410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.281 [2024-07-24 05:08:06.576743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.281 [2024-07-24 05:08:06.576969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:52.281 [2024-07-24 05:08:06.577109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.159 ms 00:18:52.281 [2024-07-24 05:08:06.577161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.281 [2024-07-24 05:08:06.605160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.282 [2024-07-24 05:08:06.605350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:52.282 [2024-07-24 05:08:06.605499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.806 ms 00:18:52.282 [2024-07-24 05:08:06.605609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.282 [2024-07-24 05:08:06.606342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.282 [2024-07-24 05:08:06.606507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:52.282 [2024-07-24 05:08:06.606637] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:18:52.282 [2024-07-24 05:08:06.606740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.282 [2024-07-24 05:08:06.687798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.282 [2024-07-24 05:08:06.688111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:52.282 [2024-07-24 05:08:06.688260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.919 ms 00:18:52.282 [2024-07-24 05:08:06.688311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.282 [2024-07-24 05:08:06.718365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.282 [2024-07-24 05:08:06.718577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:52.282 [2024-07-24 05:08:06.718728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.856 ms 00:18:52.282 [2024-07-24 05:08:06.718785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.282 [2024-07-24 05:08:06.747405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.282 [2024-07-24 05:08:06.747636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:52.282 [2024-07-24 05:08:06.747785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.442 ms 00:18:52.282 [2024-07-24 05:08:06.747835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.282 [2024-07-24 05:08:06.776471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.282 [2024-07-24 05:08:06.776510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:52.282 [2024-07-24 05:08:06.776546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.463 ms 00:18:52.282 [2024-07-24 05:08:06.776557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.282 [2024-07-24 05:08:06.776610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.282 [2024-07-24 05:08:06.776626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:52.282 [2024-07-24 05:08:06.776643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:52.282 [2024-07-24 05:08:06.776655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.282 [2024-07-24 05:08:06.776775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.282 [2024-07-24 05:08:06.776793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:52.282 [2024-07-24 05:08:06.776807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:52.282 [2024-07-24 05:08:06.776821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.282 [2024-07-24 05:08:06.778046] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2464.993 ms, result 0 00:18:52.282 { 00:18:52.282 "name": "ftl0", 00:18:52.282 "uuid": "7092a4ce-ef22-4ebc-839b-031f075d63e1" 00:18:52.282 } 00:18:52.282 05:08:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:52.282 05:08:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:18:52.282 05:08:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:18:52.540 05:08:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
[2024-07-24 05:08:07.170487] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
I/O size of 69632 is greater than zero copy threshold (65536).
00:18:52.798 Zero copy mechanism will not be used.
00:18:52.798 Running I/O for 4 seconds...
00:18:56.996
00:18:56.996 Latency(us)
00:18:56.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:56.996 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:18:56.996 ftl0 : 4.00 1694.96 112.56 0.00 0.00 617.60 245.76 1117.09
00:18:56.996 ===================================================================================================================
00:18:56.996 Total : 1694.96 112.56 0.00 0.00 617.60 245.76 1117.09
00:18:56.996 0
00:18:56.996 [2024-07-24 05:08:11.180176] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:56.996 05:08:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-07-24 05:08:11.313396] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:19:01.201
00:19:01.201 Latency(us)
00:19:01.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:01.201 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:19:01.201 ftl0 : 4.02 7420.78 28.99 0.00 0.00 17204.29 342.57 35270.28
00:19:01.201 ===================================================================================================================
00:19:01.201 Total : 7420.78 28.99 0.00 0.00 17204.29 0.00 35270.28
00:19:01.201 [2024-07-24 05:08:15.340373] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
0
00:19:01.201 05:08:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-07-24 05:08:15.473460] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
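While the verify pass above runs, a note on how the ftl0 device under test was put together. The trace earlier in this log builds it with plain rpc.py calls: attach the base PCIe controller, carve a thin lvol out of an lvstore on it, attach the cache controller and split it, then bind everything with bdev_ftl_create. A condensed replay of that sequence, assuming the same paths and sizes as the test (the $rpc shorthand is ours, not part of the test script):

    #!/usr/bin/env bash
    # Condensed replay of the FTL setup steps traced above (a sketch, not the test script itself).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base device -> nvme0n1
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                            # lvstore 819eefc8-cac8-4056-85d7-14956f4028a8
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 819eefc8-cac8-4056-85d7-14956f4028a8   # 103424 MiB thin lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # cache device -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                              # 5171 MiB NV cache -> nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d 80ed5491-8ed5-48cb-9b8b-5011e95e2482 -c nvc0n1p0 --l2p_dram_limit 20

The --l2p_dram_limit 20 argument is what the startup trace is accounting for when it prints "L2P entries: 20971520" with an address size of 4: 20971520 x 4 B = 80 MiB of L2P (the "Region l2p ... 80.00 MiB" line), of which at most 19 of the 20 MiB budget stays resident, per the ftl_l2p_cache notice.

Likewise, the MiB/s column in the result tables is just IOPS x IO size / 2^20, which is a quick way to sanity-check the runs that completed above (the check_throughput helper below is ours, not a bdevperf utility):

    check_throughput() { # args: IOPS IO_SIZE_BYTES
            awk -v iops="$1" -v sz="$2" 'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
    }
    check_throughput 1694.96 69632   # -> 112.56 MiB/s, the q=1 randwrite row
    check_throughput 7420.78 4096    # -> 28.99 MiB/s, the q=128 randwrite row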
00:19:05.394
00:19:05.394 Latency(us)
00:19:05.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:05.394 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:05.394 Verification LBA range: start 0x0 length 0x1400000
00:19:05.394 ftl0 : 4.01 5167.42 20.19 0.00 0.00 24675.96 387.26 39559.91
00:19:05.394 ===================================================================================================================
00:19:05.394 Total : 5167.42 20.19 0.00 0.00 24675.96 0.00 39559.91
00:19:05.394 0
00:19:05.394 [2024-07-24 05:08:19.502907] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
05:08:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-07-24 05:08:19.750466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-24 05:08:19.750523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-07-24 05:08:19.750561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
[2024-07-24 05:08:19.750575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-24 05:08:19.750607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-07-24 05:08:19.753738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-24 05:08:19.753777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-07-24 05:08:19.753808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.110 ms
[2024-07-24 05:08:19.753823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-24 05:08:19.755561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-24 05:08:19.755668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
[2024-07-24 05:08:19.755685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.710 ms
[2024-07-24 05:08:19.755698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-24 05:08:19.928499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-24 05:08:19.928587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
[2024-07-24 05:08:19.928609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 172.777 ms
[2024-07-24 05:08:19.928626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-24 05:08:19.934400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-24 05:08:19.934463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
[2024-07-24 05:08:19.934480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.727 ms
[2024-07-24 05:08:19.934505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-24 05:08:19.960498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-24 05:08:19.960559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
[2024-07-24 05:08:19.960577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 25.919 ms 00:19:05.394 [2024-07-24 05:08:19.960590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.394 [2024-07-24 05:08:19.977648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.394 [2024-07-24 05:08:19.977712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:05.394 [2024-07-24 05:08:19.977735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.013 ms 00:19:05.394 [2024-07-24 05:08:19.977749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.394 [2024-07-24 05:08:19.978049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.394 [2024-07-24 05:08:19.978077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:05.394 [2024-07-24 05:08:19.978091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:19:05.394 [2024-07-24 05:08:19.978107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.394 [2024-07-24 05:08:20.006704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.394 [2024-07-24 05:08:20.006770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:05.394 [2024-07-24 05:08:20.006789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.574 ms 00:19:05.394 [2024-07-24 05:08:20.006802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.655 [2024-07-24 05:08:20.037725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.655 [2024-07-24 05:08:20.037806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:05.655 [2024-07-24 05:08:20.037826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.843 ms 00:19:05.655 [2024-07-24 05:08:20.037840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.655 [2024-07-24 05:08:20.064544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.655 [2024-07-24 05:08:20.064605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:05.655 [2024-07-24 05:08:20.064623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.628 ms 00:19:05.655 [2024-07-24 05:08:20.064636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.655 [2024-07-24 05:08:20.091208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.655 [2024-07-24 05:08:20.091311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:05.655 [2024-07-24 05:08:20.091331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.455 ms 00:19:05.655 [2024-07-24 05:08:20.091347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.655 [2024-07-24 05:08:20.091391] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:05.655 [2024-07-24 05:08:20.091417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:19:05.655 [2024-07-24 05:08:20.091471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:05.655 [2024-07-24 05:08:20.091886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.091934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.091950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.091963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.091975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.091987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092564] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:05.656 [2024-07-24 05:08:20.092871] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:05.656 [2024-07-24 05:08:20.092883] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7092a4ce-ef22-4ebc-839b-031f075d63e1 00:19:05.656 [2024-07-24 05:08:20.092907] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:05.656 [2024-07-24 05:08:20.092920] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:19:05.656 [2024-07-24 05:08:20.092934] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:05.656 [2024-07-24 05:08:20.092948] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:05.656 [2024-07-24 05:08:20.092960] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:05.656 [2024-07-24 05:08:20.092972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:05.656 [2024-07-24 05:08:20.092985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:05.656 [2024-07-24 05:08:20.092995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:05.656 [2024-07-24 05:08:20.093009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:05.656 [2024-07-24 05:08:20.093020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.656 [2024-07-24 05:08:20.093033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:05.656 [2024-07-24 05:08:20.093046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.632 ms 00:19:05.656 [2024-07-24 05:08:20.093074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.656 [2024-07-24 05:08:20.108560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.656 [2024-07-24 05:08:20.108621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:05.657 [2024-07-24 05:08:20.108637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.427 ms 00:19:05.657 [2024-07-24 05:08:20.108650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.657 [2024-07-24 05:08:20.109131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.657 [2024-07-24 05:08:20.109165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:05.657 [2024-07-24 05:08:20.109187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:19:05.657 [2024-07-24 05:08:20.109201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.657 [2024-07-24 05:08:20.148159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.657 [2024-07-24 05:08:20.148229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:05.657 [2024-07-24 05:08:20.148277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.657 [2024-07-24 05:08:20.148294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.657 [2024-07-24 05:08:20.148362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.657 [2024-07-24 05:08:20.148380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:05.657 [2024-07-24 05:08:20.148392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.657 [2024-07-24 05:08:20.148405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.657 [2024-07-24 05:08:20.148521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.657 [2024-07-24 05:08:20.148549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:05.657 [2024-07-24 05:08:20.148562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.657 [2024-07-24 05:08:20.148575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.657 [2024-07-24 05:08:20.148596] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.657 [2024-07-24 05:08:20.148612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:05.657 [2024-07-24 05:08:20.148623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.657 [2024-07-24 05:08:20.148636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.657 [2024-07-24 05:08:20.244352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.657 [2024-07-24 05:08:20.244422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:05.657 [2024-07-24 05:08:20.244442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.657 [2024-07-24 05:08:20.244459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.917 [2024-07-24 05:08:20.323949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.917 [2024-07-24 05:08:20.324011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:05.917 [2024-07-24 05:08:20.324030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.917 [2024-07-24 05:08:20.324044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.917 [2024-07-24 05:08:20.324174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.917 [2024-07-24 05:08:20.324211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:05.917 [2024-07-24 05:08:20.324244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.917 [2024-07-24 05:08:20.324258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.917 [2024-07-24 05:08:20.324320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.917 [2024-07-24 05:08:20.324342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:05.917 [2024-07-24 05:08:20.324355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.917 [2024-07-24 05:08:20.324369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.917 [2024-07-24 05:08:20.324494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.917 [2024-07-24 05:08:20.324526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:05.917 [2024-07-24 05:08:20.324540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.917 [2024-07-24 05:08:20.324560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.917 [2024-07-24 05:08:20.324612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.917 [2024-07-24 05:08:20.324634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:05.917 [2024-07-24 05:08:20.324648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.917 [2024-07-24 05:08:20.324661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.917 [2024-07-24 05:08:20.324707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.917 [2024-07-24 05:08:20.324725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:05.917 [2024-07-24 05:08:20.324738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.917 [2024-07-24 05:08:20.324767] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:05.917 [2024-07-24 05:08:20.324821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.917 [2024-07-24 05:08:20.324840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:05.917 [2024-07-24 05:08:20.324853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.917 [2024-07-24 05:08:20.324866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.917 [2024-07-24 05:08:20.325041] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 574.527 ms, result 0 00:19:05.917 true 00:19:05.917 05:08:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 79651 00:19:05.917 05:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 79651 ']' 00:19:05.917 05:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 79651 00:19:05.917 05:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname 00:19:05.917 05:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:05.917 05:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79651 00:19:05.917 05:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:05.917 05:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:05.917 05:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79651' 00:19:05.917 killing process with pid 79651 00:19:05.917 05:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 79651 00:19:05.917 Received shutdown signal, test time was about 4.000000 seconds 00:19:05.917 00:19:05.917 Latency(us) 00:19:05.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.917 =================================================================================================================== 00:19:05.917 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:05.917 05:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 79651 00:19:10.128 05:08:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:19:10.128 05:08:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:19:10.128 05:08:23 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:10.128 05:08:23 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:10.128 Remove shared memory files 00:19:10.128 05:08:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:19:10.128 05:08:23 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:10.128 05:08:23 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:10.128 05:08:23 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:10.128 05:08:23 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:10.128 05:08:23 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:10.128 05:08:23 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:10.128 ************************************ 00:19:10.128 END TEST ftl_bdevperf 00:19:10.128 ************************************ 00:19:10.128 00:19:10.128 real 0m24.161s 00:19:10.128 user 0m27.455s 00:19:10.128 sys 0m1.069s 00:19:10.128 05:08:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:10.128 05:08:23 ftl.ftl_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.128 05:08:23 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:10.128 05:08:23 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:10.128 05:08:23 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:10.128 05:08:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:10.128 ************************************ 00:19:10.128 START TEST ftl_trim 00:19:10.128 ************************************ 00:19:10.128 05:08:24 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:10.128 * Looking for test storage... 00:19:10.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 
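A note on the shutdown sequence traced above: the "killprocess 79651" block is autotest_common.sh's generic process reaper. A minimal bash sketch of the flow as it can be reconstructed from this xtrace alone (the real helper has more branches, for example for sudo-wrapped targets; pid 79651 and the reactor_0 process name are particulars of this run):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1                 # the '[' -z 79651 ']' guard in the trace
      kill -0 "$pid" || return 1                # only proceed if the pid is still alive
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # resolves to reactor_0 here
      fi
      [[ $process_name != sudo ]] || return 1   # assumption: sudo-wrapped targets are handled differently
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                               # returns once bdevperf has flushed its final stats
  }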
00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:10.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=80000 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 80000 00:19:10.128 05:08:24 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 80000 ']' 00:19:10.128 05:08:24 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:10.128 05:08:24 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.128 05:08:24 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.128 05:08:24 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.128 05:08:24 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.128 05:08:24 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:10.128 [2024-07-24 05:08:24.236705] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
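A note on the startup pattern visible above: trim.sh launches a dedicated SPDK target on a three-core mask (0x7), records its pid as svcpid (80000 in this run), and blocks in waitforlisten until the UNIX domain RPC socket /var/tmp/spdk.sock answers before creating any bdevs. A condensed sketch of that pattern, assuming the default socket path; the polling loop is an illustration, not autotest_common.sh's actual (more defensive) waitforlisten, and rpc_get_methods is used here only as a cheap probe RPC:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
  svcpid=$!                                              # 80000 in this run
  # poll the default RPC socket until the target responds
  while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null; do
      kill -0 "$svcpid" || exit 1                        # give up if the target died during init
      sleep 0.1
  done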
00:19:10.129 [2024-07-24 05:08:24.236928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80000 ] 00:19:10.129 [2024-07-24 05:08:24.407988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:10.129 [2024-07-24 05:08:24.586629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.129 [2024-07-24 05:08:24.586736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.129 [2024-07-24 05:08:24.586744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.708 05:08:25 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.708 05:08:25 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:19:10.708 05:08:25 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:10.708 05:08:25 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:10.708 05:08:25 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:10.708 05:08:25 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:10.708 05:08:25 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:10.708 05:08:25 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:10.968 05:08:25 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:10.968 05:08:25 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:11.227 05:08:25 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:11.227 05:08:25 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bdev_name=nvme0n1 00:19:11.227 05:08:25 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local bdev_info 00:19:11.227 05:08:25 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bs 00:19:11.227 05:08:25 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local nb 00:19:11.227 05:08:25 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:11.486 05:08:25 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:19:11.486 { 00:19:11.486 "name": "nvme0n1", 00:19:11.486 "aliases": [ 00:19:11.486 "99ce2257-562a-4107-b21e-910b9a4003d2" 00:19:11.486 ], 00:19:11.486 "product_name": "NVMe disk", 00:19:11.486 "block_size": 4096, 00:19:11.486 "num_blocks": 1310720, 00:19:11.486 "uuid": "99ce2257-562a-4107-b21e-910b9a4003d2", 00:19:11.486 "assigned_rate_limits": { 00:19:11.486 "rw_ios_per_sec": 0, 00:19:11.486 "rw_mbytes_per_sec": 0, 00:19:11.486 "r_mbytes_per_sec": 0, 00:19:11.486 "w_mbytes_per_sec": 0 00:19:11.486 }, 00:19:11.486 "claimed": true, 00:19:11.486 "claim_type": "read_many_write_one", 00:19:11.486 "zoned": false, 00:19:11.486 "supported_io_types": { 00:19:11.486 "read": true, 00:19:11.486 "write": true, 00:19:11.486 "unmap": true, 00:19:11.486 "flush": true, 00:19:11.486 "reset": true, 00:19:11.486 "nvme_admin": true, 00:19:11.486 "nvme_io": true, 00:19:11.486 "nvme_io_md": false, 00:19:11.486 "write_zeroes": true, 00:19:11.486 "zcopy": false, 00:19:11.486 "get_zone_info": false, 00:19:11.486 "zone_management": false, 00:19:11.486 "zone_append": false, 00:19:11.486 "compare": true, 00:19:11.486 "compare_and_write": false, 00:19:11.486 "abort": true, 00:19:11.486 "seek_hole": false, 00:19:11.486 "seek_data": false, 00:19:11.486 
"copy": true, 00:19:11.486 "nvme_iov_md": false 00:19:11.486 }, 00:19:11.486 "driver_specific": { 00:19:11.486 "nvme": [ 00:19:11.486 { 00:19:11.486 "pci_address": "0000:00:11.0", 00:19:11.486 "trid": { 00:19:11.486 "trtype": "PCIe", 00:19:11.486 "traddr": "0000:00:11.0" 00:19:11.486 }, 00:19:11.486 "ctrlr_data": { 00:19:11.486 "cntlid": 0, 00:19:11.486 "vendor_id": "0x1b36", 00:19:11.486 "model_number": "QEMU NVMe Ctrl", 00:19:11.486 "serial_number": "12341", 00:19:11.486 "firmware_revision": "8.0.0", 00:19:11.486 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:11.486 "oacs": { 00:19:11.486 "security": 0, 00:19:11.486 "format": 1, 00:19:11.486 "firmware": 0, 00:19:11.486 "ns_manage": 1 00:19:11.486 }, 00:19:11.486 "multi_ctrlr": false, 00:19:11.486 "ana_reporting": false 00:19:11.486 }, 00:19:11.486 "vs": { 00:19:11.486 "nvme_version": "1.4" 00:19:11.486 }, 00:19:11.486 "ns_data": { 00:19:11.486 "id": 1, 00:19:11.486 "can_share": false 00:19:11.486 } 00:19:11.486 } 00:19:11.486 ], 00:19:11.486 "mp_policy": "active_passive" 00:19:11.486 } 00:19:11.486 } 00:19:11.486 ]' 00:19:11.486 05:08:25 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:19:11.486 05:08:25 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # bs=4096 00:19:11.486 05:08:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:19:11.486 05:08:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # nb=1310720 00:19:11.486 05:08:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bdev_size=5120 00:19:11.486 05:08:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # echo 5120 00:19:11.486 05:08:25 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:11.486 05:08:25 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:11.486 05:08:25 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:11.486 05:08:25 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:11.486 05:08:25 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:11.746 05:08:26 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=819eefc8-cac8-4056-85d7-14956f4028a8 00:19:11.746 05:08:26 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:11.746 05:08:26 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 819eefc8-cac8-4056-85d7-14956f4028a8 00:19:12.005 05:08:26 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:12.264 05:08:26 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=5da2d41f-f572-4282-9b95-f107de292170 00:19:12.264 05:08:26 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5da2d41f-f572-4282-9b95-f107de292170 00:19:12.523 05:08:26 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=ae850742-3021-4c2a-ac92-146a1a087ae3 00:19:12.523 05:08:26 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ae850742-3021-4c2a-ac92-146a1a087ae3 00:19:12.523 05:08:26 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:12.523 05:08:26 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:12.523 05:08:26 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=ae850742-3021-4c2a-ac92-146a1a087ae3 00:19:12.523 05:08:26 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:12.523 05:08:26 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size ae850742-3021-4c2a-ac92-146a1a087ae3 00:19:12.523 05:08:26 
ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bdev_name=ae850742-3021-4c2a-ac92-146a1a087ae3 00:19:12.523 05:08:26 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local bdev_info 00:19:12.523 05:08:26 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bs 00:19:12.523 05:08:26 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local nb 00:19:12.523 05:08:26 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ae850742-3021-4c2a-ac92-146a1a087ae3 00:19:12.781 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:19:12.781 { 00:19:12.781 "name": "ae850742-3021-4c2a-ac92-146a1a087ae3", 00:19:12.781 "aliases": [ 00:19:12.781 "lvs/nvme0n1p0" 00:19:12.781 ], 00:19:12.781 "product_name": "Logical Volume", 00:19:12.781 "block_size": 4096, 00:19:12.781 "num_blocks": 26476544, 00:19:12.781 "uuid": "ae850742-3021-4c2a-ac92-146a1a087ae3", 00:19:12.781 "assigned_rate_limits": { 00:19:12.781 "rw_ios_per_sec": 0, 00:19:12.781 "rw_mbytes_per_sec": 0, 00:19:12.781 "r_mbytes_per_sec": 0, 00:19:12.781 "w_mbytes_per_sec": 0 00:19:12.781 }, 00:19:12.781 "claimed": false, 00:19:12.781 "zoned": false, 00:19:12.781 "supported_io_types": { 00:19:12.781 "read": true, 00:19:12.781 "write": true, 00:19:12.781 "unmap": true, 00:19:12.781 "flush": false, 00:19:12.781 "reset": true, 00:19:12.781 "nvme_admin": false, 00:19:12.781 "nvme_io": false, 00:19:12.781 "nvme_io_md": false, 00:19:12.781 "write_zeroes": true, 00:19:12.781 "zcopy": false, 00:19:12.781 "get_zone_info": false, 00:19:12.781 "zone_management": false, 00:19:12.781 "zone_append": false, 00:19:12.781 "compare": false, 00:19:12.781 "compare_and_write": false, 00:19:12.781 "abort": false, 00:19:12.781 "seek_hole": true, 00:19:12.781 "seek_data": true, 00:19:12.781 "copy": false, 00:19:12.781 "nvme_iov_md": false 00:19:12.781 }, 00:19:12.781 "driver_specific": { 00:19:12.781 "lvol": { 00:19:12.781 "lvol_store_uuid": "5da2d41f-f572-4282-9b95-f107de292170", 00:19:12.781 "base_bdev": "nvme0n1", 00:19:12.781 "thin_provision": true, 00:19:12.781 "num_allocated_clusters": 0, 00:19:12.781 "snapshot": false, 00:19:12.781 "clone": false, 00:19:12.782 "esnap_clone": false 00:19:12.782 } 00:19:12.782 } 00:19:12.782 } 00:19:12.782 ]' 00:19:12.782 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:19:12.782 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # bs=4096 00:19:12.782 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:19:12.782 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # nb=26476544 00:19:12.782 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:19:12.782 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # echo 103424 00:19:12.782 05:08:27 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:12.782 05:08:27 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:12.782 05:08:27 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:13.040 05:08:27 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:13.040 05:08:27 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:13.040 05:08:27 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size ae850742-3021-4c2a-ac92-146a1a087ae3 00:19:13.040 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bdev_name=ae850742-3021-4c2a-ac92-146a1a087ae3 00:19:13.040 
05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local bdev_info 00:19:13.040 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bs 00:19:13.040 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local nb 00:19:13.040 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ae850742-3021-4c2a-ac92-146a1a087ae3 00:19:13.299 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:19:13.299 { 00:19:13.299 "name": "ae850742-3021-4c2a-ac92-146a1a087ae3", 00:19:13.299 "aliases": [ 00:19:13.299 "lvs/nvme0n1p0" 00:19:13.299 ], 00:19:13.299 "product_name": "Logical Volume", 00:19:13.299 "block_size": 4096, 00:19:13.299 "num_blocks": 26476544, 00:19:13.299 "uuid": "ae850742-3021-4c2a-ac92-146a1a087ae3", 00:19:13.299 "assigned_rate_limits": { 00:19:13.299 "rw_ios_per_sec": 0, 00:19:13.299 "rw_mbytes_per_sec": 0, 00:19:13.299 "r_mbytes_per_sec": 0, 00:19:13.299 "w_mbytes_per_sec": 0 00:19:13.299 }, 00:19:13.299 "claimed": false, 00:19:13.299 "zoned": false, 00:19:13.299 "supported_io_types": { 00:19:13.299 "read": true, 00:19:13.299 "write": true, 00:19:13.299 "unmap": true, 00:19:13.299 "flush": false, 00:19:13.299 "reset": true, 00:19:13.299 "nvme_admin": false, 00:19:13.299 "nvme_io": false, 00:19:13.299 "nvme_io_md": false, 00:19:13.299 "write_zeroes": true, 00:19:13.299 "zcopy": false, 00:19:13.299 "get_zone_info": false, 00:19:13.299 "zone_management": false, 00:19:13.300 "zone_append": false, 00:19:13.300 "compare": false, 00:19:13.300 "compare_and_write": false, 00:19:13.300 "abort": false, 00:19:13.300 "seek_hole": true, 00:19:13.300 "seek_data": true, 00:19:13.300 "copy": false, 00:19:13.300 "nvme_iov_md": false 00:19:13.300 }, 00:19:13.300 "driver_specific": { 00:19:13.300 "lvol": { 00:19:13.300 "lvol_store_uuid": "5da2d41f-f572-4282-9b95-f107de292170", 00:19:13.300 "base_bdev": "nvme0n1", 00:19:13.300 "thin_provision": true, 00:19:13.300 "num_allocated_clusters": 0, 00:19:13.300 "snapshot": false, 00:19:13.300 "clone": false, 00:19:13.300 "esnap_clone": false 00:19:13.300 } 00:19:13.300 } 00:19:13.300 } 00:19:13.300 ]' 00:19:13.300 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:19:13.300 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # bs=4096 00:19:13.300 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:19:13.559 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # nb=26476544 00:19:13.559 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:19:13.559 05:08:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # echo 103424 00:19:13.559 05:08:27 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:13.559 05:08:27 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:13.817 05:08:28 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:13.817 05:08:28 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:13.817 05:08:28 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size ae850742-3021-4c2a-ac92-146a1a087ae3 00:19:13.817 05:08:28 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bdev_name=ae850742-3021-4c2a-ac92-146a1a087ae3 00:19:13.817 05:08:28 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local bdev_info 00:19:13.817 05:08:28 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bs 00:19:13.817 05:08:28 ftl.ftl_trim -- 
common/autotest_common.sh@1379 -- # local nb 00:19:13.817 05:08:28 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ae850742-3021-4c2a-ac92-146a1a087ae3 00:19:14.076 05:08:28 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:19:14.076 { 00:19:14.076 "name": "ae850742-3021-4c2a-ac92-146a1a087ae3", 00:19:14.076 "aliases": [ 00:19:14.076 "lvs/nvme0n1p0" 00:19:14.076 ], 00:19:14.076 "product_name": "Logical Volume", 00:19:14.076 "block_size": 4096, 00:19:14.076 "num_blocks": 26476544, 00:19:14.076 "uuid": "ae850742-3021-4c2a-ac92-146a1a087ae3", 00:19:14.076 "assigned_rate_limits": { 00:19:14.076 "rw_ios_per_sec": 0, 00:19:14.076 "rw_mbytes_per_sec": 0, 00:19:14.076 "r_mbytes_per_sec": 0, 00:19:14.076 "w_mbytes_per_sec": 0 00:19:14.076 }, 00:19:14.076 "claimed": false, 00:19:14.076 "zoned": false, 00:19:14.076 "supported_io_types": { 00:19:14.076 "read": true, 00:19:14.076 "write": true, 00:19:14.076 "unmap": true, 00:19:14.076 "flush": false, 00:19:14.076 "reset": true, 00:19:14.076 "nvme_admin": false, 00:19:14.076 "nvme_io": false, 00:19:14.076 "nvme_io_md": false, 00:19:14.076 "write_zeroes": true, 00:19:14.076 "zcopy": false, 00:19:14.076 "get_zone_info": false, 00:19:14.076 "zone_management": false, 00:19:14.076 "zone_append": false, 00:19:14.076 "compare": false, 00:19:14.076 "compare_and_write": false, 00:19:14.076 "abort": false, 00:19:14.076 "seek_hole": true, 00:19:14.076 "seek_data": true, 00:19:14.076 "copy": false, 00:19:14.076 "nvme_iov_md": false 00:19:14.076 }, 00:19:14.076 "driver_specific": { 00:19:14.076 "lvol": { 00:19:14.076 "lvol_store_uuid": "5da2d41f-f572-4282-9b95-f107de292170", 00:19:14.076 "base_bdev": "nvme0n1", 00:19:14.076 "thin_provision": true, 00:19:14.076 "num_allocated_clusters": 0, 00:19:14.076 "snapshot": false, 00:19:14.076 "clone": false, 00:19:14.076 "esnap_clone": false 00:19:14.076 } 00:19:14.076 } 00:19:14.076 } 00:19:14.076 ]' 00:19:14.076 05:08:28 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:19:14.076 05:08:28 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # bs=4096 00:19:14.076 05:08:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:19:14.076 05:08:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # nb=26476544 00:19:14.076 05:08:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:19:14.076 05:08:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # echo 103424 00:19:14.076 05:08:28 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:14.076 05:08:28 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ae850742-3021-4c2a-ac92-146a1a087ae3 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:14.336 [2024-07-24 05:08:28.824654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.336 [2024-07-24 05:08:28.824752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:14.336 [2024-07-24 05:08:28.824790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:14.336 [2024-07-24 05:08:28.824807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.336 [2024-07-24 05:08:28.828576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.336 [2024-07-24 05:08:28.828624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.336 [2024-07-24 05:08:28.828658] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.726 ms 00:19:14.336 [2024-07-24 05:08:28.828671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.336 [2024-07-24 05:08:28.828869] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:14.336 [2024-07-24 05:08:28.829962] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:14.336 [2024-07-24 05:08:28.830008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.336 [2024-07-24 05:08:28.830031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.336 [2024-07-24 05:08:28.830045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:19:14.336 [2024-07-24 05:08:28.830060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.336 [2024-07-24 05:08:28.830319] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7530738e-4cfd-417f-b87d-53757612b8c5 00:19:14.336 [2024-07-24 05:08:28.831612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.336 [2024-07-24 05:08:28.831668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:14.336 [2024-07-24 05:08:28.831704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:14.336 [2024-07-24 05:08:28.831716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.336 [2024-07-24 05:08:28.836726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.336 [2024-07-24 05:08:28.836776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.336 [2024-07-24 05:08:28.836812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.921 ms 00:19:14.336 [2024-07-24 05:08:28.836824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.336 [2024-07-24 05:08:28.837086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.336 [2024-07-24 05:08:28.837134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.336 [2024-07-24 05:08:28.837161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:19:14.336 [2024-07-24 05:08:28.837181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.336 [2024-07-24 05:08:28.837258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.336 [2024-07-24 05:08:28.837292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:14.336 [2024-07-24 05:08:28.837324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:14.336 [2024-07-24 05:08:28.837349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.336 [2024-07-24 05:08:28.837421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:14.336 [2024-07-24 05:08:28.842317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.336 [2024-07-24 05:08:28.842398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.336 [2024-07-24 05:08:28.842416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.917 ms 00:19:14.336 [2024-07-24 05:08:28.842430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.336 [2024-07-24 
05:08:28.842526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.336 [2024-07-24 05:08:28.842550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:14.336 [2024-07-24 05:08:28.842563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:14.336 [2024-07-24 05:08:28.842576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.336 [2024-07-24 05:08:28.842613] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:14.337 [2024-07-24 05:08:28.842784] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:14.337 [2024-07-24 05:08:28.842803] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:14.337 [2024-07-24 05:08:28.842823] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:14.337 [2024-07-24 05:08:28.842838] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:14.337 [2024-07-24 05:08:28.842864] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:14.337 [2024-07-24 05:08:28.842941] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:14.337 [2024-07-24 05:08:28.842971] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:14.337 [2024-07-24 05:08:28.842993] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:14.337 [2024-07-24 05:08:28.843096] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:14.337 [2024-07-24 05:08:28.843121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.337 [2024-07-24 05:08:28.843138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:14.337 [2024-07-24 05:08:28.843161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:19:14.337 [2024-07-24 05:08:28.843189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.337 [2024-07-24 05:08:28.843373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.337 [2024-07-24 05:08:28.843411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:14.337 [2024-07-24 05:08:28.843438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:14.337 [2024-07-24 05:08:28.843471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.337 [2024-07-24 05:08:28.843652] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:14.337 [2024-07-24 05:08:28.843701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:14.337 [2024-07-24 05:08:28.843730] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.337 [2024-07-24 05:08:28.843759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.337 [2024-07-24 05:08:28.843784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:14.337 [2024-07-24 05:08:28.843810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:14.337 [2024-07-24 05:08:28.843833] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:14.337 [2024-07-24 05:08:28.843887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:19:14.337 [2024-07-24 05:08:28.843912] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:14.337 [2024-07-24 05:08:28.843940] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.337 [2024-07-24 05:08:28.843962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:14.337 [2024-07-24 05:08:28.843986] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:14.337 [2024-07-24 05:08:28.844008] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.337 [2024-07-24 05:08:28.844039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:14.337 [2024-07-24 05:08:28.844064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:14.337 [2024-07-24 05:08:28.844091] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.337 [2024-07-24 05:08:28.844113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:14.337 [2024-07-24 05:08:28.844143] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:14.337 [2024-07-24 05:08:28.844165] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.337 [2024-07-24 05:08:28.844189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:14.337 [2024-07-24 05:08:28.844211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:14.337 [2024-07-24 05:08:28.844236] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.337 [2024-07-24 05:08:28.844258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:14.337 [2024-07-24 05:08:28.844285] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:14.337 [2024-07-24 05:08:28.844309] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.337 [2024-07-24 05:08:28.844334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:14.337 [2024-07-24 05:08:28.844357] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:14.337 [2024-07-24 05:08:28.844383] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.337 [2024-07-24 05:08:28.844405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:14.337 [2024-07-24 05:08:28.844430] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:14.337 [2024-07-24 05:08:28.844451] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.337 [2024-07-24 05:08:28.844476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:14.337 [2024-07-24 05:08:28.844498] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:14.337 [2024-07-24 05:08:28.844526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.337 [2024-07-24 05:08:28.844550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:14.337 [2024-07-24 05:08:28.844591] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:14.337 [2024-07-24 05:08:28.844612] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.337 [2024-07-24 05:08:28.844641] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:14.337 [2024-07-24 05:08:28.844663] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:14.337 [2024-07-24 05:08:28.844706] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.337 [2024-07-24 05:08:28.844729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:14.337 [2024-07-24 05:08:28.844755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:14.337 [2024-07-24 05:08:28.844777] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.337 [2024-07-24 05:08:28.844803] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:14.337 [2024-07-24 05:08:28.844828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:14.337 [2024-07-24 05:08:28.844854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.337 [2024-07-24 05:08:28.844897] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.337 [2024-07-24 05:08:28.844934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:14.337 [2024-07-24 05:08:28.844957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:14.337 [2024-07-24 05:08:28.844987] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:14.337 [2024-07-24 05:08:28.845011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:14.337 [2024-07-24 05:08:28.845037] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:14.337 [2024-07-24 05:08:28.845062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:14.337 [2024-07-24 05:08:28.845109] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:14.337 [2024-07-24 05:08:28.845137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.337 [2024-07-24 05:08:28.845166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:14.337 [2024-07-24 05:08:28.845190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:14.337 [2024-07-24 05:08:28.845216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:14.337 [2024-07-24 05:08:28.845239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:14.337 [2024-07-24 05:08:28.845265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:14.337 [2024-07-24 05:08:28.845288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:14.337 [2024-07-24 05:08:28.845315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:14.337 [2024-07-24 05:08:28.845338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:14.337 [2024-07-24 05:08:28.845367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:14.337 [2024-07-24 05:08:28.845392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:14.337 [2024-07-24 05:08:28.845421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:14.337 [2024-07-24 05:08:28.845445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:14.337 [2024-07-24 05:08:28.845470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:14.337 [2024-07-24 05:08:28.845493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:14.337 [2024-07-24 05:08:28.845519] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:14.337 [2024-07-24 05:08:28.845544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.337 [2024-07-24 05:08:28.845573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:14.338 [2024-07-24 05:08:28.845598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:14.338 [2024-07-24 05:08:28.845624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:14.338 [2024-07-24 05:08:28.845647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:14.338 [2024-07-24 05:08:28.845677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.338 [2024-07-24 05:08:28.845701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:14.338 [2024-07-24 05:08:28.845727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.098 ms 00:19:14.338 [2024-07-24 05:08:28.845749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.338 [2024-07-24 05:08:28.845907] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
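(Editor's note, not part of the captured output: the superblock dump above lists region sizes in FTL blocks, and the FTL block size for ftl0 is 4096 bytes per the bdev_get_bdevs output later in this stage, so the hex blk_sz values can be cross-checked against the MiB figures in the dump_region lines. A minimal shell sketch of that arithmetic:)

  # l2p region: blk_sz:0x5a00 in 4 KiB blocks -> the 90.00 MiB shown by dump_region
  $ echo "$(( 0x5a00 * 4096 / 1024 / 1024 )) MiB"
  90 MiB
  # each p2l region: blk_sz:0x800 -> the 8.00 MiB shown for p2l0..p2l3
  $ echo "$(( 0x800 * 4096 / 1024 / 1024 )) MiB"
  8 MiB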
00:19:14.338 [2024-07-24 05:08:28.845942] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:16.238 [2024-07-24 05:08:30.851366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.238 [2024-07-24 05:08:30.851438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:16.238 [2024-07-24 05:08:30.851479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2005.468 ms 00:19:16.238 [2024-07-24 05:08:30.851492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.497 [2024-07-24 05:08:30.882079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.497 [2024-07-24 05:08:30.882140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:16.497 [2024-07-24 05:08:30.882180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.232 ms 00:19:16.497 [2024-07-24 05:08:30.882191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.497 [2024-07-24 05:08:30.882411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.497 [2024-07-24 05:08:30.882430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:16.497 [2024-07-24 05:08:30.882448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:16.497 [2024-07-24 05:08:30.882459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.497 [2024-07-24 05:08:30.934525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.497 [2024-07-24 05:08:30.934611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:16.497 [2024-07-24 05:08:30.934656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.021 ms 00:19:16.497 [2024-07-24 05:08:30.934675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.497 [2024-07-24 05:08:30.934895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.497 [2024-07-24 05:08:30.934926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:16.497 [2024-07-24 05:08:30.934950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:16.498 [2024-07-24 05:08:30.934979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.498 [2024-07-24 05:08:30.935456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.498 [2024-07-24 05:08:30.935491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:16.498 [2024-07-24 05:08:30.935515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:19:16.498 [2024-07-24 05:08:30.935532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.498 [2024-07-24 05:08:30.935780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.498 [2024-07-24 05:08:30.935802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:16.498 [2024-07-24 05:08:30.935823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:19:16.498 [2024-07-24 05:08:30.935860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.498 [2024-07-24 05:08:30.953416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.498 [2024-07-24 05:08:30.953473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:16.498 [2024-07-24 
05:08:30.953511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.499 ms 00:19:16.498 [2024-07-24 05:08:30.953523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.498 [2024-07-24 05:08:30.965637] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:16.498 [2024-07-24 05:08:30.979064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.498 [2024-07-24 05:08:30.979157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:16.498 [2024-07-24 05:08:30.979177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.384 ms 00:19:16.498 [2024-07-24 05:08:30.979191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.498 [2024-07-24 05:08:31.041522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.498 [2024-07-24 05:08:31.041611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:16.498 [2024-07-24 05:08:31.041633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.172 ms 00:19:16.498 [2024-07-24 05:08:31.041646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.498 [2024-07-24 05:08:31.042258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.498 [2024-07-24 05:08:31.042354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:16.498 [2024-07-24 05:08:31.042581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:19:16.498 [2024-07-24 05:08:31.042612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.498 [2024-07-24 05:08:31.070955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.498 [2024-07-24 05:08:31.071029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:16.498 [2024-07-24 05:08:31.071047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.290 ms 00:19:16.498 [2024-07-24 05:08:31.071061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.498 [2024-07-24 05:08:31.100922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.498 [2024-07-24 05:08:31.101000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:16.498 [2024-07-24 05:08:31.101021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.764 ms 00:19:16.498 [2024-07-24 05:08:31.101049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.498 [2024-07-24 05:08:31.101817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.498 [2024-07-24 05:08:31.101886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:16.498 [2024-07-24 05:08:31.101905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:19:16.498 [2024-07-24 05:08:31.101920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.756 [2024-07-24 05:08:31.196159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.757 [2024-07-24 05:08:31.196254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:16.757 [2024-07-24 05:08:31.196276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.196 ms 00:19:16.757 [2024-07-24 05:08:31.196293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.757 [2024-07-24 
05:08:31.226492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.757 [2024-07-24 05:08:31.226581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:16.757 [2024-07-24 05:08:31.226604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.063 ms 00:19:16.757 [2024-07-24 05:08:31.226618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.757 [2024-07-24 05:08:31.257377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.757 [2024-07-24 05:08:31.257465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:16.757 [2024-07-24 05:08:31.257484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.647 ms 00:19:16.757 [2024-07-24 05:08:31.257498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.757 [2024-07-24 05:08:31.290017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.757 [2024-07-24 05:08:31.290115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:16.757 [2024-07-24 05:08:31.290136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.403 ms 00:19:16.757 [2024-07-24 05:08:31.290149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.757 [2024-07-24 05:08:31.290259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.757 [2024-07-24 05:08:31.290285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:16.757 [2024-07-24 05:08:31.290299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:16.757 [2024-07-24 05:08:31.290315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.757 [2024-07-24 05:08:31.290407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.757 [2024-07-24 05:08:31.290427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:16.757 [2024-07-24 05:08:31.290440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:16.757 [2024-07-24 05:08:31.290476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.757 [2024-07-24 05:08:31.291750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:16.757 [2024-07-24 05:08:31.296112] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2466.563 ms, result 0 00:19:16.757 [2024-07-24 05:08:31.297103] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:16.757 { 00:19:16.757 "name": "ftl0", 00:19:16.757 "uuid": "7530738e-4cfd-417f-b87d-53757612b8c5" 00:19:16.757 } 00:19:16.757 05:08:31 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:16.757 05:08:31 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:19:16.757 05:08:31 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:16.757 05:08:31 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:19:16.757 05:08:31 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:16.757 05:08:31 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:16.757 05:08:31 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:17.016 05:08:31 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:17.274 [ 00:19:17.274 { 00:19:17.274 "name": "ftl0", 00:19:17.274 "aliases": [ 00:19:17.274 "7530738e-4cfd-417f-b87d-53757612b8c5" 00:19:17.274 ], 00:19:17.274 "product_name": "FTL disk", 00:19:17.274 "block_size": 4096, 00:19:17.274 "num_blocks": 23592960, 00:19:17.274 "uuid": "7530738e-4cfd-417f-b87d-53757612b8c5", 00:19:17.274 "assigned_rate_limits": { 00:19:17.274 "rw_ios_per_sec": 0, 00:19:17.274 "rw_mbytes_per_sec": 0, 00:19:17.274 "r_mbytes_per_sec": 0, 00:19:17.274 "w_mbytes_per_sec": 0 00:19:17.274 }, 00:19:17.274 "claimed": false, 00:19:17.274 "zoned": false, 00:19:17.274 "supported_io_types": { 00:19:17.274 "read": true, 00:19:17.274 "write": true, 00:19:17.274 "unmap": true, 00:19:17.274 "flush": true, 00:19:17.274 "reset": false, 00:19:17.274 "nvme_admin": false, 00:19:17.274 "nvme_io": false, 00:19:17.274 "nvme_io_md": false, 00:19:17.274 "write_zeroes": true, 00:19:17.274 "zcopy": false, 00:19:17.274 "get_zone_info": false, 00:19:17.274 "zone_management": false, 00:19:17.274 "zone_append": false, 00:19:17.274 "compare": false, 00:19:17.274 "compare_and_write": false, 00:19:17.274 "abort": false, 00:19:17.274 "seek_hole": false, 00:19:17.274 "seek_data": false, 00:19:17.274 "copy": false, 00:19:17.274 "nvme_iov_md": false 00:19:17.274 }, 00:19:17.274 "driver_specific": { 00:19:17.274 "ftl": { 00:19:17.274 "base_bdev": "ae850742-3021-4c2a-ac92-146a1a087ae3", 00:19:17.274 "cache": "nvc0n1p0" 00:19:17.274 } 00:19:17.274 } 00:19:17.274 } 00:19:17.274 ] 00:19:17.274 05:08:31 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:19:17.274 05:08:31 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:17.274 05:08:31 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:17.599 05:08:32 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:17.599 05:08:32 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:17.858 05:08:32 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:17.858 { 00:19:17.858 "name": "ftl0", 00:19:17.858 "aliases": [ 00:19:17.858 "7530738e-4cfd-417f-b87d-53757612b8c5" 00:19:17.858 ], 00:19:17.858 "product_name": "FTL disk", 00:19:17.858 "block_size": 4096, 00:19:17.858 "num_blocks": 23592960, 00:19:17.858 "uuid": "7530738e-4cfd-417f-b87d-53757612b8c5", 00:19:17.858 "assigned_rate_limits": { 00:19:17.858 "rw_ios_per_sec": 0, 00:19:17.858 "rw_mbytes_per_sec": 0, 00:19:17.858 "r_mbytes_per_sec": 0, 00:19:17.858 "w_mbytes_per_sec": 0 00:19:17.858 }, 00:19:17.858 "claimed": false, 00:19:17.858 "zoned": false, 00:19:17.858 "supported_io_types": { 00:19:17.858 "read": true, 00:19:17.858 "write": true, 00:19:17.858 "unmap": true, 00:19:17.858 "flush": true, 00:19:17.858 "reset": false, 00:19:17.858 "nvme_admin": false, 00:19:17.858 "nvme_io": false, 00:19:17.858 "nvme_io_md": false, 00:19:17.858 "write_zeroes": true, 00:19:17.858 "zcopy": false, 00:19:17.858 "get_zone_info": false, 00:19:17.858 "zone_management": false, 00:19:17.858 "zone_append": false, 00:19:17.858 "compare": false, 00:19:17.858 "compare_and_write": false, 00:19:17.858 "abort": false, 00:19:17.858 "seek_hole": false, 00:19:17.858 "seek_data": false, 00:19:17.858 "copy": false, 00:19:17.858 "nvme_iov_md": false 00:19:17.858 }, 00:19:17.858 "driver_specific": { 00:19:17.858 "ftl": { 00:19:17.858 "base_bdev": "ae850742-3021-4c2a-ac92-146a1a087ae3", 00:19:17.858 "cache": "nvc0n1p0" 
00:19:17.858 } 00:19:17.858 } 00:19:17.858 } 00:19:17.858 ]' 00:19:17.858 05:08:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:17.858 05:08:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:17.858 05:08:32 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:18.117 [2024-07-24 05:08:32.628561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.117 [2024-07-24 05:08:32.628626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:18.117 [2024-07-24 05:08:32.628666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:18.117 [2024-07-24 05:08:32.628678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.117 [2024-07-24 05:08:32.628729] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:18.117 [2024-07-24 05:08:32.632041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.117 [2024-07-24 05:08:32.632079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:18.117 [2024-07-24 05:08:32.632095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.290 ms 00:19:18.117 [2024-07-24 05:08:32.632110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.117 [2024-07-24 05:08:32.632680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.117 [2024-07-24 05:08:32.632727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:18.117 [2024-07-24 05:08:32.632742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:19:18.117 [2024-07-24 05:08:32.632760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.117 [2024-07-24 05:08:32.636158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.117 [2024-07-24 05:08:32.636193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:18.117 [2024-07-24 05:08:32.636208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.360 ms 00:19:18.117 [2024-07-24 05:08:32.636220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.117 [2024-07-24 05:08:32.642999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.117 [2024-07-24 05:08:32.643035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:18.117 [2024-07-24 05:08:32.643066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.726 ms 00:19:18.117 [2024-07-24 05:08:32.643079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.117 [2024-07-24 05:08:32.671353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.117 [2024-07-24 05:08:32.671420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:18.118 [2024-07-24 05:08:32.671439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.176 ms 00:19:18.118 [2024-07-24 05:08:32.671456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.118 [2024-07-24 05:08:32.690276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.118 [2024-07-24 05:08:32.690343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:18.118 [2024-07-24 05:08:32.690380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.722 ms 00:19:18.118 
[2024-07-24 05:08:32.690394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.118 [2024-07-24 05:08:32.690622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.118 [2024-07-24 05:08:32.690646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:18.118 [2024-07-24 05:08:32.690659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:19:18.118 [2024-07-24 05:08:32.690671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.118 [2024-07-24 05:08:32.719611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.118 [2024-07-24 05:08:32.719712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:18.118 [2024-07-24 05:08:32.719732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.890 ms 00:19:18.118 [2024-07-24 05:08:32.719745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.378 [2024-07-24 05:08:32.747492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.378 [2024-07-24 05:08:32.747584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:18.378 [2024-07-24 05:08:32.747605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.630 ms 00:19:18.378 [2024-07-24 05:08:32.747623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.378 [2024-07-24 05:08:32.776109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.378 [2024-07-24 05:08:32.776193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:18.378 [2024-07-24 05:08:32.776212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.364 ms 00:19:18.378 [2024-07-24 05:08:32.776225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.378 [2024-07-24 05:08:32.802988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.378 [2024-07-24 05:08:32.803048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:18.378 [2024-07-24 05:08:32.803065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.618 ms 00:19:18.378 [2024-07-24 05:08:32.803077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.378 [2024-07-24 05:08:32.803167] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:18.378 [2024-07-24 05:08:32.803196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:18.378 [2024-07-24 05:08:32.803210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:18.378 [2024-07-24 05:08:32.803223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:18.378 [2024-07-24 05:08:32.803234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:18.378 [2024-07-24 05:08:32.803256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:18.378 [2024-07-24 05:08:32.803285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:18.378 [2024-07-24 05:08:32.803300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:18.378 [2024-07-24 05:08:32.803312] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free [... ftl_dev_dump_bands output for Bands 9 through 82 elided; every band reports the same 0 / 261120 wr_cnt: 0 state: free ...]
00:19:18.379 [2024-07-24 05:08:32.804328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:18.379 [2024-07-24 05:08:32.804559] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:18.380 [2024-07-24 05:08:32.804571] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7530738e-4cfd-417f-b87d-53757612b8c5 00:19:18.380 [2024-07-24 05:08:32.804586] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:18.380 [2024-07-24 05:08:32.804599] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:18.380 [2024-07-24 05:08:32.804612] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:18.380 [2024-07-24 05:08:32.804623] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:18.380 [2024-07-24 05:08:32.804635] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:18.380 [2024-07-24 05:08:32.804646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:18.380 [2024-07-24 05:08:32.804658] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:18.380 [2024-07-24 05:08:32.804668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:18.380 [2024-07-24 05:08:32.804679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:18.380 [2024-07-24 05:08:32.804690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.380 [2024-07-24 05:08:32.804703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:18.380 [2024-07-24 05:08:32.804715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.526 ms 00:19:18.380 [2024-07-24 05:08:32.804728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.380 [2024-07-24 05:08:32.819383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.380 [2024-07-24 05:08:32.819441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:18.380 [2024-07-24 05:08:32.819458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.615 ms 00:19:18.380 [2024-07-24 05:08:32.819473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.380 [2024-07-24 05:08:32.819949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.380 [2024-07-24 05:08:32.819977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:18.380 [2024-07-24 05:08:32.819991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:19:18.380 [2024-07-24 05:08:32.820004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.380 [2024-07-24 05:08:32.870593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.380 [2024-07-24 05:08:32.870682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:18.380 [2024-07-24 05:08:32.870702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.380 [2024-07-24 05:08:32.870730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.380 [2024-07-24 05:08:32.870924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.380 [2024-07-24 05:08:32.870971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:18.380 [2024-07-24 05:08:32.871001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.380 [2024-07-24 05:08:32.871015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.380 [2024-07-24 05:08:32.871110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.380 [2024-07-24 05:08:32.871134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:18.380 [2024-07-24 05:08:32.871147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.380 [2024-07-24 05:08:32.871163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.380 [2024-07-24 05:08:32.871205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.380 [2024-07-24 05:08:32.871222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:18.380 [2024-07-24 05:08:32.871234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.380 [2024-07-24 05:08:32.871273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.380 [2024-07-24 05:08:32.969078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:19:18.380 [2024-07-24 05:08:32.969155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:18.380 [2024-07-24 05:08:32.969173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.380 [2024-07-24 05:08:32.969186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.639 [2024-07-24 05:08:33.045371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.639 [2024-07-24 05:08:33.045451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:18.639 [2024-07-24 05:08:33.045469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.639 [2024-07-24 05:08:33.045483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.639 [2024-07-24 05:08:33.045599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.639 [2024-07-24 05:08:33.045624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:18.639 [2024-07-24 05:08:33.045636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.639 [2024-07-24 05:08:33.045650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.639 [2024-07-24 05:08:33.045709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.639 [2024-07-24 05:08:33.045726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:18.639 [2024-07-24 05:08:33.045737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.639 [2024-07-24 05:08:33.045749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.639 [2024-07-24 05:08:33.045904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.639 [2024-07-24 05:08:33.045929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:18.639 [2024-07-24 05:08:33.045961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.639 [2024-07-24 05:08:33.045974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.639 [2024-07-24 05:08:33.046046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.639 [2024-07-24 05:08:33.046068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:18.639 [2024-07-24 05:08:33.046080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.639 [2024-07-24 05:08:33.046092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.639 [2024-07-24 05:08:33.046154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.639 [2024-07-24 05:08:33.046171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:18.639 [2024-07-24 05:08:33.046217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.639 [2024-07-24 05:08:33.046232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.639 [2024-07-24 05:08:33.046299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.640 [2024-07-24 05:08:33.046319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:18.640 [2024-07-24 05:08:33.046331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.640 [2024-07-24 05:08:33.046343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.640 [2024-07-24 
05:08:33.046554] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 417.983 ms, result 0 00:19:18.640 true 00:19:18.640 05:08:33 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 80000 00:19:18.640 05:08:33 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80000 ']' 00:19:18.640 05:08:33 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80000 00:19:18.640 05:08:33 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:19:18.640 05:08:33 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:18.640 05:08:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80000 00:19:18.640 killing process with pid 80000 00:19:18.640 05:08:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:18.640 05:08:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:18.640 05:08:33 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80000' 00:19:18.640 05:08:33 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 80000 00:19:18.640 05:08:33 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 80000 00:19:22.834 05:08:37 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:23.772 65536+0 records in 00:19:23.772 65536+0 records out 00:19:23.772 268435456 bytes (268 MB, 256 MiB) copied, 1.05419 s, 255 MB/s 00:19:23.772 05:08:38 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:24.031 [2024-07-24 05:08:38.469594] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
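(Editor's note, not part of the captured output: the dd statistics earlier in this stage are internally consistent; 65536 records of 4 KiB are exactly the 268435456 bytes reported, and that byte count over the elapsed 1.05419 s reproduces the ~255 MB/s rate. A quick check:)

  $ echo $(( 65536 * 4096 ))
  268435456
  $ awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 1.05419 / 1e6 }'
  255 MB/s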
00:19:24.031 [2024-07-24 05:08:38.469754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80182 ] 00:19:24.031 [2024-07-24 05:08:38.631643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.291 [2024-07-24 05:08:38.839703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.550 [2024-07-24 05:08:39.114684] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:24.550 [2024-07-24 05:08:39.114779] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:24.813 [2024-07-24 05:08:39.275786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.813 [2024-07-24 05:08:39.275873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:24.813 [2024-07-24 05:08:39.275910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:24.813 [2024-07-24 05:08:39.275921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.813 [2024-07-24 05:08:39.278796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.813 [2024-07-24 05:08:39.279042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:24.813 [2024-07-24 05:08:39.279097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.846 ms 00:19:24.813 [2024-07-24 05:08:39.279113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.813 [2024-07-24 05:08:39.279365] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:24.813 [2024-07-24 05:08:39.280293] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:24.813 [2024-07-24 05:08:39.280327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.813 [2024-07-24 05:08:39.280355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.813 [2024-07-24 05:08:39.280367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:19:24.813 [2024-07-24 05:08:39.280377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.813 [2024-07-24 05:08:39.281607] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:24.813 [2024-07-24 05:08:39.295701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.813 [2024-07-24 05:08:39.295741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:24.813 [2024-07-24 05:08:39.295780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.095 ms 00:19:24.813 [2024-07-24 05:08:39.295803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.814 [2024-07-24 05:08:39.295945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.814 [2024-07-24 05:08:39.295983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:24.814 [2024-07-24 05:08:39.295996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:24.814 [2024-07-24 05:08:39.296006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.814 [2024-07-24 05:08:39.300529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:24.814 [2024-07-24 05:08:39.300571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.814 [2024-07-24 05:08:39.300602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.468 ms 00:19:24.814 [2024-07-24 05:08:39.300612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.814 [2024-07-24 05:08:39.300727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.814 [2024-07-24 05:08:39.300747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.814 [2024-07-24 05:08:39.300759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:24.814 [2024-07-24 05:08:39.300779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.814 [2024-07-24 05:08:39.300818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.814 [2024-07-24 05:08:39.300833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:24.814 [2024-07-24 05:08:39.300847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:24.814 [2024-07-24 05:08:39.300893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.814 [2024-07-24 05:08:39.300929] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:24.814 [2024-07-24 05:08:39.304852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.814 [2024-07-24 05:08:39.304886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.814 [2024-07-24 05:08:39.304916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.932 ms 00:19:24.814 [2024-07-24 05:08:39.304926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.814 [2024-07-24 05:08:39.304990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.814 [2024-07-24 05:08:39.305007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:24.814 [2024-07-24 05:08:39.305019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:24.814 [2024-07-24 05:08:39.305028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.814 [2024-07-24 05:08:39.305052] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:24.814 [2024-07-24 05:08:39.305077] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:24.814 [2024-07-24 05:08:39.305119] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:24.814 [2024-07-24 05:08:39.305138] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:24.814 [2024-07-24 05:08:39.305226] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:24.814 [2024-07-24 05:08:39.305240] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:24.814 [2024-07-24 05:08:39.305253] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:24.814 [2024-07-24 05:08:39.305267] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:24.814 [2024-07-24 05:08:39.305279] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:24.814 [2024-07-24 05:08:39.305294] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:24.814 [2024-07-24 05:08:39.305304] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:24.814 [2024-07-24 05:08:39.305313] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:24.814 [2024-07-24 05:08:39.305323] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:24.814 [2024-07-24 05:08:39.305334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.814 [2024-07-24 05:08:39.305344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:24.814 [2024-07-24 05:08:39.305354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:19:24.814 [2024-07-24 05:08:39.305364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.814 [2024-07-24 05:08:39.305455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.814 [2024-07-24 05:08:39.305470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:24.814 [2024-07-24 05:08:39.305486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:24.814 [2024-07-24 05:08:39.305496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.814 [2024-07-24 05:08:39.305588] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:24.814 [2024-07-24 05:08:39.305603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:24.814 [2024-07-24 05:08:39.305614] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.814 [2024-07-24 05:08:39.305625] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.814 [2024-07-24 05:08:39.305634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:24.814 [2024-07-24 05:08:39.305643] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:24.814 [2024-07-24 05:08:39.305652] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:24.814 [2024-07-24 05:08:39.305663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:24.814 [2024-07-24 05:08:39.305673] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:24.814 [2024-07-24 05:08:39.305682] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.814 [2024-07-24 05:08:39.305706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:24.814 [2024-07-24 05:08:39.305716] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:24.814 [2024-07-24 05:08:39.305725] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.814 [2024-07-24 05:08:39.305734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:24.814 [2024-07-24 05:08:39.305743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:24.815 [2024-07-24 05:08:39.305752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.815 [2024-07-24 05:08:39.305762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:24.815 [2024-07-24 05:08:39.305771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:24.815 [2024-07-24 05:08:39.305794] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.815 [2024-07-24 05:08:39.305804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:24.815 [2024-07-24 05:08:39.305814] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:24.815 [2024-07-24 05:08:39.305823] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.815 [2024-07-24 05:08:39.305832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:24.815 [2024-07-24 05:08:39.305841] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:24.815 [2024-07-24 05:08:39.305850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.815 [2024-07-24 05:08:39.305859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:24.815 [2024-07-24 05:08:39.305889] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:24.815 [2024-07-24 05:08:39.305918] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.815 [2024-07-24 05:08:39.305928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:24.815 [2024-07-24 05:08:39.305938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:24.815 [2024-07-24 05:08:39.305947] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.815 [2024-07-24 05:08:39.305956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:24.815 [2024-07-24 05:08:39.305966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:24.815 [2024-07-24 05:08:39.305975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.815 [2024-07-24 05:08:39.305985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:24.815 [2024-07-24 05:08:39.306011] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:24.815 [2024-07-24 05:08:39.306020] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.815 [2024-07-24 05:08:39.306030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:24.815 [2024-07-24 05:08:39.306040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:24.815 [2024-07-24 05:08:39.306049] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.815 [2024-07-24 05:08:39.306059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:24.815 [2024-07-24 05:08:39.306070] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:24.815 [2024-07-24 05:08:39.306080] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.815 [2024-07-24 05:08:39.306089] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:24.815 [2024-07-24 05:08:39.306115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:24.815 [2024-07-24 05:08:39.306125] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.815 [2024-07-24 05:08:39.306135] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.815 [2024-07-24 05:08:39.306150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:24.815 [2024-07-24 05:08:39.306160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:24.815 [2024-07-24 05:08:39.306170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:24.815 
[2024-07-24 05:08:39.306180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:24.815 [2024-07-24 05:08:39.306206] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:24.815 [2024-07-24 05:08:39.306232] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:24.815 [2024-07-24 05:08:39.306243] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:24.815 [2024-07-24 05:08:39.306283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.815 [2024-07-24 05:08:39.306296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:24.815 [2024-07-24 05:08:39.306323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:24.815 [2024-07-24 05:08:39.306334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:24.815 [2024-07-24 05:08:39.306345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:24.815 [2024-07-24 05:08:39.306356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:24.815 [2024-07-24 05:08:39.306367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:24.815 [2024-07-24 05:08:39.306377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:24.815 [2024-07-24 05:08:39.306388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:24.815 [2024-07-24 05:08:39.306400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:24.815 [2024-07-24 05:08:39.306412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:24.815 [2024-07-24 05:08:39.306423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:24.815 [2024-07-24 05:08:39.306434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:24.816 [2024-07-24 05:08:39.306445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:24.816 [2024-07-24 05:08:39.306456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:24.816 [2024-07-24 05:08:39.306467] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:24.816 [2024-07-24 05:08:39.306480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.816 [2024-07-24 05:08:39.306492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:24.816 [2024-07-24 05:08:39.306503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:24.816 [2024-07-24 05:08:39.306515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:24.816 [2024-07-24 05:08:39.306536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:24.816 [2024-07-24 05:08:39.306548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.816 [2024-07-24 05:08:39.306559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:24.816 [2024-07-24 05:08:39.306570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:19:24.816 [2024-07-24 05:08:39.306580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.816 [2024-07-24 05:08:39.343685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.816 [2024-07-24 05:08:39.343744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:24.816 [2024-07-24 05:08:39.343785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.008 ms 00:19:24.816 [2024-07-24 05:08:39.343797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.816 [2024-07-24 05:08:39.344033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.816 [2024-07-24 05:08:39.344056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:24.816 [2024-07-24 05:08:39.344075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:24.816 [2024-07-24 05:08:39.344086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.816 [2024-07-24 05:08:39.377308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.816 [2024-07-24 05:08:39.377357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:24.816 [2024-07-24 05:08:39.377390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.189 ms 00:19:24.816 [2024-07-24 05:08:39.377417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.816 [2024-07-24 05:08:39.377588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.816 [2024-07-24 05:08:39.377608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:24.816 [2024-07-24 05:08:39.377621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:24.816 [2024-07-24 05:08:39.377632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.816 [2024-07-24 05:08:39.377995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.816 [2024-07-24 05:08:39.378014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:24.816 [2024-07-24 05:08:39.378027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:19:24.816 [2024-07-24 05:08:39.378038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.816 [2024-07-24 05:08:39.378239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.816 [2024-07-24 05:08:39.378258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:24.816 [2024-07-24 05:08:39.378270] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:19:24.816 [2024-07-24 05:08:39.378281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.816 [2024-07-24 05:08:39.393537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.816 [2024-07-24 05:08:39.393576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:24.816 [2024-07-24 05:08:39.393608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.227 ms 00:19:24.816 [2024-07-24 05:08:39.393619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.816 [2024-07-24 05:08:39.407954] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:24.816 [2024-07-24 05:08:39.407997] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:24.816 [2024-07-24 05:08:39.408032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.816 [2024-07-24 05:08:39.408044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:24.816 [2024-07-24 05:08:39.408055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.254 ms 00:19:24.816 [2024-07-24 05:08:39.408069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.816 [2024-07-24 05:08:39.433084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.816 [2024-07-24 05:08:39.433124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:24.816 [2024-07-24 05:08:39.433156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.911 ms 00:19:24.816 [2024-07-24 05:08:39.433166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.084 [2024-07-24 05:08:39.450013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.084 [2024-07-24 05:08:39.450059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:25.084 [2024-07-24 05:08:39.450078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.774 ms 00:19:25.084 [2024-07-24 05:08:39.450090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.084 [2024-07-24 05:08:39.465922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.084 [2024-07-24 05:08:39.465988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:25.084 [2024-07-24 05:08:39.466020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.733 ms 00:19:25.084 [2024-07-24 05:08:39.466041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.084 [2024-07-24 05:08:39.466922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.084 [2024-07-24 05:08:39.466999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:25.084 [2024-07-24 05:08:39.467030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:19:25.084 [2024-07-24 05:08:39.467042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.084 [2024-07-24 05:08:39.530122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.084 [2024-07-24 05:08:39.530197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:25.085 [2024-07-24 05:08:39.530232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.031 ms 00:19:25.085 [2024-07-24 05:08:39.530243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.085 [2024-07-24 05:08:39.541165] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:25.085 [2024-07-24 05:08:39.553749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.085 [2024-07-24 05:08:39.553816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:25.085 [2024-07-24 05:08:39.553851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.356 ms 00:19:25.085 [2024-07-24 05:08:39.553895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.085 [2024-07-24 05:08:39.554052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.085 [2024-07-24 05:08:39.554073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:25.085 [2024-07-24 05:08:39.554091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:25.085 [2024-07-24 05:08:39.554102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.085 [2024-07-24 05:08:39.554182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.085 [2024-07-24 05:08:39.554199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:25.085 [2024-07-24 05:08:39.554211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:25.085 [2024-07-24 05:08:39.554221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.085 [2024-07-24 05:08:39.554252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.085 [2024-07-24 05:08:39.554266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:25.085 [2024-07-24 05:08:39.554278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:25.085 [2024-07-24 05:08:39.554308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.085 [2024-07-24 05:08:39.554380] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:25.085 [2024-07-24 05:08:39.554396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.085 [2024-07-24 05:08:39.554406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:25.085 [2024-07-24 05:08:39.554418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:25.086 [2024-07-24 05:08:39.554428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.086 [2024-07-24 05:08:39.581423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.086 [2024-07-24 05:08:39.581464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:25.086 [2024-07-24 05:08:39.581503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.969 ms 00:19:25.086 [2024-07-24 05:08:39.581514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.086 [2024-07-24 05:08:39.581632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.086 [2024-07-24 05:08:39.581651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:25.086 [2024-07-24 05:08:39.581664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:25.086 [2024-07-24 05:08:39.581675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:25.086 [2024-07-24 05:08:39.582947] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:25.086 [2024-07-24 05:08:39.586967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 306.729 ms, result 0 00:19:25.086 [2024-07-24 05:08:39.587956] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:25.086 [2024-07-24 05:08:39.604679] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.936  Copying: 256/256 [MB] (average 21 MBps)[2024-07-24 05:08:51.328702] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:36.936 [2024-07-24 05:08:51.343785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.936 [2024-07-24 05:08:51.343834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:36.936 [2024-07-24 05:08:51.343880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:36.936 [2024-07-24 05:08:51.343896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.936 [2024-07-24 05:08:51.343936] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:36.936 [2024-07-24 05:08:51.347917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.936 [2024-07-24 05:08:51.347962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:36.936 [2024-07-24 05:08:51.347980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.954 ms 00:19:36.936 [2024-07-24 05:08:51.347994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.936 [2024-07-24 05:08:51.349771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.936 [2024-07-24 05:08:51.349824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:36.936 [2024-07-24 05:08:51.349858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.741 ms 00:19:36.936 [2024-07-24 05:08:51.349875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.936 [2024-07-24 05:08:51.358248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.936 [2024-07-24 05:08:51.358302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:36.936 [2024-07-24 05:08:51.358321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.342 ms 00:19:36.936 [2024-07-24 05:08:51.358344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.936 [2024-07-24 05:08:51.367537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.936 [2024-07-24 05:08:51.367594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:36.936 [2024-07-24 05:08:51.367613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.116 ms 00:19:36.936 [2024-07-24 05:08:51.367626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:19:36.936 [2024-07-24 05:08:51.405350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.936 [2024-07-24 05:08:51.405396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:36.936 [2024-07-24 05:08:51.405416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.659 ms 00:19:36.936 [2024-07-24 05:08:51.405430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.936 [2024-07-24 05:08:51.426876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.936 [2024-07-24 05:08:51.426930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:36.936 [2024-07-24 05:08:51.426950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.364 ms 00:19:36.936 [2024-07-24 05:08:51.426964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.936 [2024-07-24 05:08:51.427151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.936 [2024-07-24 05:08:51.427175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:36.936 [2024-07-24 05:08:51.427191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:19:36.936 [2024-07-24 05:08:51.427205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.936 [2024-07-24 05:08:51.458232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.936 [2024-07-24 05:08:51.458283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:36.936 [2024-07-24 05:08:51.458297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.999 ms 00:19:36.936 [2024-07-24 05:08:51.458307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.936 [2024-07-24 05:08:51.485384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.936 [2024-07-24 05:08:51.485435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:36.936 [2024-07-24 05:08:51.485449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.018 ms 00:19:36.936 [2024-07-24 05:08:51.485458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.936 [2024-07-24 05:08:51.512016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.936 [2024-07-24 05:08:51.512066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:36.936 [2024-07-24 05:08:51.512095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.494 ms 00:19:36.936 [2024-07-24 05:08:51.512105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.936 [2024-07-24 05:08:51.538439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.936 [2024-07-24 05:08:51.538489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:36.936 [2024-07-24 05:08:51.538502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.248 ms 00:19:36.936 [2024-07-24 05:08:51.538512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.936 [2024-07-24 05:08:51.538569] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:36.936 [2024-07-24 05:08:51.538592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:36.936 [2024-07-24 05:08:51.538612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:19:36.936 [2024-07-24 05:08:51.538623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:36.936 [2024-07-24 05:08:51.538633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:36.936 [2024-07-24 05:08:51.538644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:36.936 [2024-07-24 05:08:51.538655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:36.936 [2024-07-24 05:08:51.538665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:36.936 [2024-07-24 05:08:51.538675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:36.936 [2024-07-24 05:08:51.538685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:36.936 [2024-07-24 05:08:51.538696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:36.936 [2024-07-24 05:08:51.538708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:36.936 [2024-07-24 05:08:51.538718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:36.936 [2024-07-24 05:08:51.538728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.538999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539168] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539474] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:36.937 [2024-07-24 05:08:51.539632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:36.938 [2024-07-24 05:08:51.539643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:36.938 [2024-07-24 05:08:51.539653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:36.938 [2024-07-24 05:08:51.539675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:36.938 [2024-07-24 05:08:51.539686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:36.938 [2024-07-24 05:08:51.539698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:36.938 [2024-07-24 05:08:51.539721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:36.938 [2024-07-24 05:08:51.539731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:36.938 [2024-07-24 05:08:51.539742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:36.938 [2024-07-24 05:08:51.539761] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:36.938 [2024-07-24 05:08:51.539771] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7530738e-4cfd-417f-b87d-53757612b8c5 00:19:36.938 [2024-07-24 05:08:51.539782] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:36.938 [2024-07-24 05:08:51.539791] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:36.938 [2024-07-24 05:08:51.539801] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:36.938 [2024-07-24 05:08:51.539825] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:36.938 [2024-07-24 05:08:51.539835] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:36.938 [2024-07-24 05:08:51.539845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:36.938 [2024-07-24 05:08:51.539854] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:36.938 [2024-07-24 05:08:51.539864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:36.938 [2024-07-24 05:08:51.539873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:36.938 [2024-07-24 05:08:51.539895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.938 [2024-07-24 05:08:51.539906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:36.938 [2024-07-24 05:08:51.539917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.328 ms 00:19:36.938 [2024-07-24 05:08:51.539933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.938 [2024-07-24 05:08:51.554157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.938 [2024-07-24 05:08:51.554205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:36.938 [2024-07-24 05:08:51.554220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.199 ms 00:19:36.938 [2024-07-24 05:08:51.554230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.938 [2024-07-24 05:08:51.554619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.938 [2024-07-24 05:08:51.554641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:36.938 [2024-07-24 05:08:51.554660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:19:36.938 [2024-07-24 05:08:51.554671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.197 [2024-07-24 05:08:51.590056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.197 [2024-07-24 05:08:51.590111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.197 [2024-07-24 05:08:51.590125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.197 [2024-07-24 05:08:51.590135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.197 [2024-07-24 05:08:51.590218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.197 [2024-07-24 05:08:51.590234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.197 [2024-07-24 05:08:51.590249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.197 [2024-07-24 05:08:51.590259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.197 [2024-07-24 05:08:51.590312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.197 [2024-07-24 05:08:51.590329] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.197 [2024-07-24 05:08:51.590339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.197 [2024-07-24 05:08:51.590349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.197 [2024-07-24 05:08:51.590371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.197 [2024-07-24 05:08:51.590384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.197 [2024-07-24 05:08:51.590394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.197 [2024-07-24 05:08:51.590410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.197 [2024-07-24 05:08:51.681397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.197 [2024-07-24 05:08:51.681469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.197 [2024-07-24 05:08:51.681486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.197 [2024-07-24 05:08:51.681495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.197 [2024-07-24 05:08:51.762690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.197 [2024-07-24 05:08:51.762757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.197 [2024-07-24 05:08:51.762782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.197 [2024-07-24 05:08:51.762792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.197 [2024-07-24 05:08:51.762925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.197 [2024-07-24 05:08:51.762944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:37.197 [2024-07-24 05:08:51.762955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.197 [2024-07-24 05:08:51.762966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.197 [2024-07-24 05:08:51.763000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.197 [2024-07-24 05:08:51.763012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:37.197 [2024-07-24 05:08:51.763024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.197 [2024-07-24 05:08:51.763033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.197 [2024-07-24 05:08:51.763202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.197 [2024-07-24 05:08:51.763238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:37.197 [2024-07-24 05:08:51.763276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.197 [2024-07-24 05:08:51.763289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.197 [2024-07-24 05:08:51.763342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.197 [2024-07-24 05:08:51.763359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:37.197 [2024-07-24 05:08:51.763372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.198 [2024-07-24 05:08:51.763383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.198 [2024-07-24 05:08:51.763436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:19:37.198 [2024-07-24 05:08:51.763452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:37.198 [2024-07-24 05:08:51.763464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.198 [2024-07-24 05:08:51.763474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.198 [2024-07-24 05:08:51.763530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.198 [2024-07-24 05:08:51.763547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:37.198 [2024-07-24 05:08:51.763574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.198 [2024-07-24 05:08:51.763585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.198 [2024-07-24 05:08:51.763761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 419.979 ms, result 0 00:19:38.575 00:19:38.575 00:19:38.575 05:08:52 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=80335 00:19:38.575 05:08:52 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:38.575 05:08:52 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 80335 00:19:38.575 05:08:52 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 80335 ']' 00:19:38.575 05:08:52 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.575 05:08:52 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.575 05:08:52 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.575 05:08:52 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.575 05:08:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:38.575 [2024-07-24 05:08:52.970332] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:19:38.575 [2024-07-24 05:08:52.970501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80335 ] 00:19:38.575 [2024-07-24 05:08:53.125125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.835 [2024-07-24 05:08:53.282315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.401 05:08:53 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.401 05:08:53 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:19:39.401 05:08:53 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:39.660 [2024-07-24 05:08:54.185481] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:39.660 [2024-07-24 05:08:54.185570] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:39.920 [2024-07-24 05:08:54.361393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.920 [2024-07-24 05:08:54.361450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:39.920 [2024-07-24 05:08:54.361485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:39.921 [2024-07-24 05:08:54.361498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.921 [2024-07-24 05:08:54.364971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.921 [2024-07-24 05:08:54.365034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:39.921 [2024-07-24 05:08:54.365050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.447 ms 00:19:39.921 [2024-07-24 05:08:54.365065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.921 [2024-07-24 05:08:54.365372] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:39.921 [2024-07-24 05:08:54.366384] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:39.921 [2024-07-24 05:08:54.366421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.921 [2024-07-24 05:08:54.366457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:39.921 [2024-07-24 05:08:54.366470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:19:39.921 [2024-07-24 05:08:54.366491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.921 [2024-07-24 05:08:54.368052] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:39.921 [2024-07-24 05:08:54.382587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.921 [2024-07-24 05:08:54.382637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:39.921 [2024-07-24 05:08:54.382678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.526 ms 00:19:39.921 [2024-07-24 05:08:54.382691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.921 [2024-07-24 05:08:54.382873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.921 [2024-07-24 05:08:54.382895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:39.921 [2024-07-24 05:08:54.382914] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:39.921 [2024-07-24 05:08:54.382926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.921 [2024-07-24 05:08:54.387464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.921 [2024-07-24 05:08:54.387506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:39.921 [2024-07-24 05:08:54.387551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.460 ms 00:19:39.921 [2024-07-24 05:08:54.387564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.921 [2024-07-24 05:08:54.387800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.921 [2024-07-24 05:08:54.387823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:39.921 [2024-07-24 05:08:54.387854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:19:39.921 [2024-07-24 05:08:54.387877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.921 [2024-07-24 05:08:54.387925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.921 [2024-07-24 05:08:54.387948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:39.921 [2024-07-24 05:08:54.387968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:39.921 [2024-07-24 05:08:54.387980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.921 [2024-07-24 05:08:54.388020] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:39.921 [2024-07-24 05:08:54.391886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.921 [2024-07-24 05:08:54.391942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:39.921 [2024-07-24 05:08:54.391956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.882 ms 00:19:39.921 [2024-07-24 05:08:54.391972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.921 [2024-07-24 05:08:54.392034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.921 [2024-07-24 05:08:54.392111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:39.921 [2024-07-24 05:08:54.392130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:39.921 [2024-07-24 05:08:54.392146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.921 [2024-07-24 05:08:54.392175] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:39.921 [2024-07-24 05:08:54.392212] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:39.921 [2024-07-24 05:08:54.392266] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:39.921 [2024-07-24 05:08:54.392299] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:39.921 [2024-07-24 05:08:54.392399] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:39.921 [2024-07-24 05:08:54.392440] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:39.921 [2024-07-24 05:08:54.392457] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:39.921 [2024-07-24 05:08:54.392477] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:39.921 [2024-07-24 05:08:54.392491] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:39.921 [2024-07-24 05:08:54.392507] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:39.921 [2024-07-24 05:08:54.392519] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:39.921 [2024-07-24 05:08:54.392534] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:39.921 [2024-07-24 05:08:54.392546] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:39.921 [2024-07-24 05:08:54.392567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.921 [2024-07-24 05:08:54.392579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:39.921 [2024-07-24 05:08:54.392595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:19:39.921 [2024-07-24 05:08:54.392611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.921 [2024-07-24 05:08:54.392707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.921 [2024-07-24 05:08:54.392745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:39.921 [2024-07-24 05:08:54.392764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:39.921 [2024-07-24 05:08:54.392776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.921 [2024-07-24 05:08:54.392938] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:39.921 [2024-07-24 05:08:54.392969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:39.921 [2024-07-24 05:08:54.392989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:39.921 [2024-07-24 05:08:54.393003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.921 [2024-07-24 05:08:54.393028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:39.921 [2024-07-24 05:08:54.393040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:39.921 [2024-07-24 05:08:54.393056] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:39.921 [2024-07-24 05:08:54.393068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:39.921 [2024-07-24 05:08:54.393088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:39.921 [2024-07-24 05:08:54.393099] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:39.921 [2024-07-24 05:08:54.393129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:39.921 [2024-07-24 05:08:54.393142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:39.921 [2024-07-24 05:08:54.393157] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:39.921 [2024-07-24 05:08:54.393168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:39.921 [2024-07-24 05:08:54.393183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:39.921 [2024-07-24 05:08:54.393194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.921 
[2024-07-24 05:08:54.393209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:39.921 [2024-07-24 05:08:54.393221] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:39.921 [2024-07-24 05:08:54.393235] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.921 [2024-07-24 05:08:54.393247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:39.921 [2024-07-24 05:08:54.393265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:39.921 [2024-07-24 05:08:54.393276] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.921 [2024-07-24 05:08:54.393292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:39.921 [2024-07-24 05:08:54.393303] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:39.921 [2024-07-24 05:08:54.393323] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.921 [2024-07-24 05:08:54.393335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:39.921 [2024-07-24 05:08:54.393349] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:39.921 [2024-07-24 05:08:54.393373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.921 [2024-07-24 05:08:54.393391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:39.921 [2024-07-24 05:08:54.393402] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:39.921 [2024-07-24 05:08:54.393417] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.921 [2024-07-24 05:08:54.393428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:39.922 [2024-07-24 05:08:54.393443] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:39.922 [2024-07-24 05:08:54.393454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:39.922 [2024-07-24 05:08:54.393469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:39.922 [2024-07-24 05:08:54.393496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:39.922 [2024-07-24 05:08:54.393511] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:39.922 [2024-07-24 05:08:54.393523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:39.922 [2024-07-24 05:08:54.393539] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:39.922 [2024-07-24 05:08:54.393550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.922 [2024-07-24 05:08:54.393570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:39.922 [2024-07-24 05:08:54.393582] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:39.922 [2024-07-24 05:08:54.393597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.922 [2024-07-24 05:08:54.393607] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:39.922 [2024-07-24 05:08:54.393624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:39.922 [2024-07-24 05:08:54.393636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:39.922 [2024-07-24 05:08:54.393652] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.922 [2024-07-24 05:08:54.393665] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:39.922 [2024-07-24 05:08:54.393681] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:39.922 [2024-07-24 05:08:54.393693] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:39.922 [2024-07-24 05:08:54.393708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:39.922 [2024-07-24 05:08:54.393720] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:39.922 [2024-07-24 05:08:54.393736] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:39.922 [2024-07-24 05:08:54.393749] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:39.922 [2024-07-24 05:08:54.393769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:39.922 [2024-07-24 05:08:54.393782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:39.922 [2024-07-24 05:08:54.393805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:39.922 [2024-07-24 05:08:54.393832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:39.922 [2024-07-24 05:08:54.393876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:39.922 [2024-07-24 05:08:54.393908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:39.922 [2024-07-24 05:08:54.393925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:39.922 [2024-07-24 05:08:54.393938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:39.922 [2024-07-24 05:08:54.393955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:39.922 [2024-07-24 05:08:54.393967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:39.922 [2024-07-24 05:08:54.393985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:39.922 [2024-07-24 05:08:54.393998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:39.922 [2024-07-24 05:08:54.394015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:39.922 [2024-07-24 05:08:54.394027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:39.922 [2024-07-24 05:08:54.394045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:39.922 [2024-07-24 05:08:54.394058] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:39.922 [2024-07-24 
05:08:54.394076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:39.922 [2024-07-24 05:08:54.394090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:39.922 [2024-07-24 05:08:54.394111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:39.922 [2024-07-24 05:08:54.394124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:39.922 [2024-07-24 05:08:54.394140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:39.922 [2024-07-24 05:08:54.394154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.922 [2024-07-24 05:08:54.394172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:39.922 [2024-07-24 05:08:54.394185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:19:39.922 [2024-07-24 05:08:54.394224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.922 [2024-07-24 05:08:54.426552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.922 [2024-07-24 05:08:54.426657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:39.922 [2024-07-24 05:08:54.426699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.229 ms 00:19:39.922 [2024-07-24 05:08:54.426717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.922 [2024-07-24 05:08:54.427034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.922 [2024-07-24 05:08:54.427073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:39.922 [2024-07-24 05:08:54.427091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:39.922 [2024-07-24 05:08:54.427124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.922 [2024-07-24 05:08:54.462080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.922 [2024-07-24 05:08:54.462175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:39.922 [2024-07-24 05:08:54.462208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.923 ms 00:19:39.922 [2024-07-24 05:08:54.462224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.922 [2024-07-24 05:08:54.462418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.922 [2024-07-24 05:08:54.462453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:39.922 [2024-07-24 05:08:54.462469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:39.922 [2024-07-24 05:08:54.462486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.922 [2024-07-24 05:08:54.462835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.922 [2024-07-24 05:08:54.462927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:39.922 [2024-07-24 05:08:54.462943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:19:39.922 [2024-07-24 05:08:54.462960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:39.922 [2024-07-24 05:08:54.463111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.922 [2024-07-24 05:08:54.463145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:39.922 [2024-07-24 05:08:54.463160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:19:39.922 [2024-07-24 05:08:54.463176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.922 [2024-07-24 05:08:54.480100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.922 [2024-07-24 05:08:54.480164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:39.922 [2024-07-24 05:08:54.480197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.893 ms 00:19:39.922 [2024-07-24 05:08:54.480215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.922 [2024-07-24 05:08:54.494593] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:39.922 [2024-07-24 05:08:54.494671] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:39.922 [2024-07-24 05:08:54.494694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.922 [2024-07-24 05:08:54.494711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:39.922 [2024-07-24 05:08:54.494724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.286 ms 00:19:39.922 [2024-07-24 05:08:54.494739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.922 [2024-07-24 05:08:54.521291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.922 [2024-07-24 05:08:54.521372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:39.922 [2024-07-24 05:08:54.521390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.401 ms 00:19:39.922 [2024-07-24 05:08:54.521414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.922 [2024-07-24 05:08:54.534883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.922 [2024-07-24 05:08:54.534964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:39.922 [2024-07-24 05:08:54.534994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.348 ms 00:19:39.922 [2024-07-24 05:08:54.535015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.922 [2024-07-24 05:08:54.549135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.923 [2024-07-24 05:08:54.549213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:39.923 [2024-07-24 05:08:54.549230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.968 ms 00:19:39.923 [2024-07-24 05:08:54.549246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.182 [2024-07-24 05:08:54.550332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.182 [2024-07-24 05:08:54.550402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:40.182 [2024-07-24 05:08:54.550433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:19:40.182 [2024-07-24 05:08:54.550450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.182 [2024-07-24 
05:08:54.622390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.182 [2024-07-24 05:08:54.622501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:40.182 [2024-07-24 05:08:54.622527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.893 ms 00:19:40.182 [2024-07-24 05:08:54.622544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.182 [2024-07-24 05:08:54.633352] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:40.182 [2024-07-24 05:08:54.645858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.182 [2024-07-24 05:08:54.645936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:40.182 [2024-07-24 05:08:54.645983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.147 ms 00:19:40.182 [2024-07-24 05:08:54.645996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.182 [2024-07-24 05:08:54.646133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.182 [2024-07-24 05:08:54.646153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:40.182 [2024-07-24 05:08:54.646203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:40.182 [2024-07-24 05:08:54.646231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.182 [2024-07-24 05:08:54.646316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.182 [2024-07-24 05:08:54.646348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:40.182 [2024-07-24 05:08:54.646368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:40.182 [2024-07-24 05:08:54.646381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.182 [2024-07-24 05:08:54.646426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.182 [2024-07-24 05:08:54.646442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:40.182 [2024-07-24 05:08:54.646459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:40.182 [2024-07-24 05:08:54.646472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.182 [2024-07-24 05:08:54.646521] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:40.182 [2024-07-24 05:08:54.646538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.182 [2024-07-24 05:08:54.646559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:40.182 [2024-07-24 05:08:54.646578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:40.182 [2024-07-24 05:08:54.646595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.182 [2024-07-24 05:08:54.673027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.182 [2024-07-24 05:08:54.673110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:40.182 [2024-07-24 05:08:54.673128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.401 ms 00:19:40.182 [2024-07-24 05:08:54.673145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.182 [2024-07-24 05:08:54.673354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.183 [2024-07-24 05:08:54.673406] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:40.183 [2024-07-24 05:08:54.673423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:40.183 [2024-07-24 05:08:54.673441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.183 [2024-07-24 05:08:54.674780] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:40.183 [2024-07-24 05:08:54.679242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.907 ms, result 0 00:19:40.183 [2024-07-24 05:08:54.680387] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:40.183 Some configs were skipped because the RPC state that can call them passed over. 00:19:40.183 05:08:54 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:40.442 [2024-07-24 05:08:54.946250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.442 [2024-07-24 05:08:54.946327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:40.442 [2024-07-24 05:08:54.946361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.666 ms 00:19:40.442 [2024-07-24 05:08:54.946393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.442 [2024-07-24 05:08:54.946468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.897 ms, result 0 00:19:40.442 true 00:19:40.442 05:08:54 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:40.702 [2024-07-24 05:08:55.186031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.702 [2024-07-24 05:08:55.186148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:40.702 [2024-07-24 05:08:55.186170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.173 ms 00:19:40.702 [2024-07-24 05:08:55.186217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.702 [2024-07-24 05:08:55.186275] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.421 ms, result 0 00:19:40.702 true 00:19:40.702 05:08:55 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 80335 00:19:40.702 05:08:55 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80335 ']' 00:19:40.702 05:08:55 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80335 00:19:40.702 05:08:55 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:19:40.702 05:08:55 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.702 05:08:55 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80335 00:19:40.702 killing process with pid 80335 00:19:40.702 05:08:55 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:40.702 05:08:55 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:40.702 05:08:55 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80335' 00:19:40.702 05:08:55 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 80335 00:19:40.702 05:08:55 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 80335 00:19:41.649 [2024-07-24 05:08:56.119318] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.649 [2024-07-24 05:08:56.119385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:41.649 [2024-07-24 05:08:56.119413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:41.649 [2024-07-24 05:08:56.119426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.649 [2024-07-24 05:08:56.119460] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:41.649 [2024-07-24 05:08:56.122688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.649 [2024-07-24 05:08:56.122739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:41.649 [2024-07-24 05:08:56.122770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.205 ms 00:19:41.649 [2024-07-24 05:08:56.122786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.649 [2024-07-24 05:08:56.123112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.649 [2024-07-24 05:08:56.123137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:41.649 [2024-07-24 05:08:56.123151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:19:41.649 [2024-07-24 05:08:56.123165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.649 [2024-07-24 05:08:56.127146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.649 [2024-07-24 05:08:56.127200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:41.649 [2024-07-24 05:08:56.127218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.957 ms 00:19:41.649 [2024-07-24 05:08:56.127232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.649 [2024-07-24 05:08:56.134176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.649 [2024-07-24 05:08:56.134247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:41.649 [2024-07-24 05:08:56.134279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.890 ms 00:19:41.649 [2024-07-24 05:08:56.134309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.649 [2024-07-24 05:08:56.146393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.649 [2024-07-24 05:08:56.146483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:41.649 [2024-07-24 05:08:56.146501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.024 ms 00:19:41.649 [2024-07-24 05:08:56.146517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.649 [2024-07-24 05:08:56.154817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.649 [2024-07-24 05:08:56.154905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:41.649 [2024-07-24 05:08:56.154924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.253 ms 00:19:41.649 [2024-07-24 05:08:56.154938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.649 [2024-07-24 05:08:56.155080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.649 [2024-07-24 05:08:56.155103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:41.649 [2024-07-24 05:08:56.155117] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:41.649 [2024-07-24 05:08:56.155145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.649 [2024-07-24 05:08:56.167799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.649 [2024-07-24 05:08:56.167904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:41.649 [2024-07-24 05:08:56.167922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.629 ms 00:19:41.649 [2024-07-24 05:08:56.167936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.649 [2024-07-24 05:08:56.180105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.649 [2024-07-24 05:08:56.180176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:41.649 [2024-07-24 05:08:56.180192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.123 ms 00:19:41.649 [2024-07-24 05:08:56.180209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.649 [2024-07-24 05:08:56.191977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.649 [2024-07-24 05:08:56.192050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:41.649 [2024-07-24 05:08:56.192066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.724 ms 00:19:41.649 [2024-07-24 05:08:56.192078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.649 [2024-07-24 05:08:56.204358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.649 [2024-07-24 05:08:56.204431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:41.649 [2024-07-24 05:08:56.204446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.210 ms 00:19:41.649 [2024-07-24 05:08:56.204459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.649 [2024-07-24 05:08:56.204502] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:41.649 [2024-07-24 05:08:56.204533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 
05:08:56.204696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.204995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.205008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.205021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.205055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:19:41.649 [2024-07-24 05:08:56.205068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.205082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.205093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.205109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.205137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:41.649 [2024-07-24 05:08:56.205153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.205994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.206007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.206025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.206038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.206055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.206068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:41.650 [2024-07-24 05:08:56.206096] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:41.650 [2024-07-24 05:08:56.206109] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7530738e-4cfd-417f-b87d-53757612b8c5 00:19:41.650 [2024-07-24 05:08:56.206131] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:41.650 [2024-07-24 05:08:56.206144] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:41.650 [2024-07-24 05:08:56.206160] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:41.650 [2024-07-24 05:08:56.206173] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:41.650 [2024-07-24 05:08:56.206189] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:41.650 [2024-07-24 05:08:56.206202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:41.650 [2024-07-24 05:08:56.206219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:41.650 [2024-07-24 05:08:56.206230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:41.650 [2024-07-24 05:08:56.206264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:41.650 [2024-07-24 05:08:56.206277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:41.650 [2024-07-24 05:08:56.206295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:41.650 [2024-07-24 05:08:56.206314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.777 ms 00:19:41.650 [2024-07-24 05:08:56.206331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.650 [2024-07-24 05:08:56.222370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.650 [2024-07-24 05:08:56.222433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:41.650 [2024-07-24 05:08:56.222463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.982 ms 00:19:41.650 [2024-07-24 05:08:56.222486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.650 [2024-07-24 05:08:56.223043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.650 [2024-07-24 05:08:56.223090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:41.650 [2024-07-24 05:08:56.223122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:19:41.650 [2024-07-24 05:08:56.223156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.650 [2024-07-24 05:08:56.277883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.650 [2024-07-24 05:08:56.277976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:41.650 [2024-07-24 05:08:56.278006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.910 [2024-07-24 05:08:56.278024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.910 [2024-07-24 05:08:56.278207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.910 [2024-07-24 05:08:56.278244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:41.910 [2024-07-24 05:08:56.278273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.910 [2024-07-24 05:08:56.278290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.910 [2024-07-24 05:08:56.278359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.910 [2024-07-24 05:08:56.278389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:41.910 [2024-07-24 05:08:56.278404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.910 [2024-07-24 05:08:56.278426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.910 [2024-07-24 05:08:56.278454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.910 [2024-07-24 05:08:56.278474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:41.910 [2024-07-24 05:08:56.278493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.910 [2024-07-24 05:08:56.278510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.910 [2024-07-24 05:08:56.379838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.910 [2024-07-24 05:08:56.379974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:41.910 [2024-07-24 05:08:56.379996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.910 [2024-07-24 05:08:56.380015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.910 [2024-07-24 
05:08:56.465883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.910 [2024-07-24 05:08:56.466025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:41.910 [2024-07-24 05:08:56.466045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.910 [2024-07-24 05:08:56.466063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.910 [2024-07-24 05:08:56.466179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.910 [2024-07-24 05:08:56.466209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:41.910 [2024-07-24 05:08:56.466224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.910 [2024-07-24 05:08:56.466246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.910 [2024-07-24 05:08:56.466284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.910 [2024-07-24 05:08:56.466321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:41.910 [2024-07-24 05:08:56.466335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.910 [2024-07-24 05:08:56.466360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.910 [2024-07-24 05:08:56.466485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.910 [2024-07-24 05:08:56.466517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:41.910 [2024-07-24 05:08:56.466532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.910 [2024-07-24 05:08:56.466550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.910 [2024-07-24 05:08:56.466609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.910 [2024-07-24 05:08:56.466637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:41.910 [2024-07-24 05:08:56.466668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.910 [2024-07-24 05:08:56.466685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.910 [2024-07-24 05:08:56.466753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.910 [2024-07-24 05:08:56.466775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:41.910 [2024-07-24 05:08:56.466788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.910 [2024-07-24 05:08:56.466808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.910 [2024-07-24 05:08:56.466903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.910 [2024-07-24 05:08:56.466931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:41.910 [2024-07-24 05:08:56.466946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.911 [2024-07-24 05:08:56.466969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.911 [2024-07-24 05:08:56.467197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.848 ms, result 0 00:19:42.848 05:08:57 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:42.848 05:08:57 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:42.848 [2024-07-24 05:08:57.437443] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:19:42.848 [2024-07-24 05:08:57.437633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80392 ] 00:19:43.108 [2024-07-24 05:08:57.605649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.367 [2024-07-24 05:08:57.772596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.627 [2024-07-24 05:08:58.062424] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:43.627 [2024-07-24 05:08:58.062531] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:43.627 [2024-07-24 05:08:58.221709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.627 [2024-07-24 05:08:58.221783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:43.627 [2024-07-24 05:08:58.221819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:43.627 [2024-07-24 05:08:58.221830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.627 [2024-07-24 05:08:58.224876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.627 [2024-07-24 05:08:58.224945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:43.627 [2024-07-24 05:08:58.224992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.007 ms 00:19:43.627 [2024-07-24 05:08:58.225007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.627 [2024-07-24 05:08:58.225157] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:43.627 [2024-07-24 05:08:58.226113] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:43.627 [2024-07-24 05:08:58.226168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.627 [2024-07-24 05:08:58.226216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:43.627 [2024-07-24 05:08:58.226228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.021 ms 00:19:43.627 [2024-07-24 05:08:58.226239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.627 [2024-07-24 05:08:58.227712] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:43.627 [2024-07-24 05:08:58.242068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.627 [2024-07-24 05:08:58.242125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:43.627 [2024-07-24 05:08:58.242164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.357 ms 00:19:43.627 [2024-07-24 05:08:58.242175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.627 [2024-07-24 05:08:58.242288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.627 [2024-07-24 05:08:58.242309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:43.627 [2024-07-24 05:08:58.242322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.025 ms 00:19:43.627 [2024-07-24 05:08:58.242333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.627 [2024-07-24 05:08:58.246800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.627 [2024-07-24 05:08:58.246878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:43.627 [2024-07-24 05:08:58.246894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.364 ms 00:19:43.627 [2024-07-24 05:08:58.246905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.627 [2024-07-24 05:08:58.247015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.628 [2024-07-24 05:08:58.247035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:43.628 [2024-07-24 05:08:58.247048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:43.628 [2024-07-24 05:08:58.247059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.628 [2024-07-24 05:08:58.247131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.628 [2024-07-24 05:08:58.247163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:43.628 [2024-07-24 05:08:58.247180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:43.628 [2024-07-24 05:08:58.247192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.628 [2024-07-24 05:08:58.247227] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:43.628 [2024-07-24 05:08:58.251071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.628 [2024-07-24 05:08:58.251121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:43.628 [2024-07-24 05:08:58.251152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.853 ms 00:19:43.628 [2024-07-24 05:08:58.251163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.628 [2024-07-24 05:08:58.251229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.628 [2024-07-24 05:08:58.251248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:43.628 [2024-07-24 05:08:58.251288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:43.628 [2024-07-24 05:08:58.251300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.628 [2024-07-24 05:08:58.251336] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:43.628 [2024-07-24 05:08:58.251366] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:43.628 [2024-07-24 05:08:58.251422] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:43.628 [2024-07-24 05:08:58.251443] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:43.628 [2024-07-24 05:08:58.251551] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:43.628 [2024-07-24 05:08:58.251568] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:43.628 [2024-07-24 05:08:58.251584] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:43.628 [2024-07-24 05:08:58.251615] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:43.628 [2024-07-24 05:08:58.251629] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:43.628 [2024-07-24 05:08:58.251662] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:43.628 [2024-07-24 05:08:58.251673] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:43.628 [2024-07-24 05:08:58.251684] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:43.628 [2024-07-24 05:08:58.251695] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:43.628 [2024-07-24 05:08:58.251707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.628 [2024-07-24 05:08:58.251718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:43.628 [2024-07-24 05:08:58.251730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:19:43.628 [2024-07-24 05:08:58.251741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.628 [2024-07-24 05:08:58.251833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.628 [2024-07-24 05:08:58.251865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:43.628 [2024-07-24 05:08:58.251883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:43.628 [2024-07-24 05:08:58.251894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.628 [2024-07-24 05:08:58.252019] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:43.628 [2024-07-24 05:08:58.252037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:43.628 [2024-07-24 05:08:58.252050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.628 [2024-07-24 05:08:58.252062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.628 [2024-07-24 05:08:58.252074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:43.628 [2024-07-24 05:08:58.252085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:43.628 [2024-07-24 05:08:58.252096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:43.628 [2024-07-24 05:08:58.252107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:43.628 [2024-07-24 05:08:58.252117] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:43.628 [2024-07-24 05:08:58.252128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.628 [2024-07-24 05:08:58.252139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:43.628 [2024-07-24 05:08:58.252166] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:43.628 [2024-07-24 05:08:58.252177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.628 [2024-07-24 05:08:58.252188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:43.628 [2024-07-24 05:08:58.252199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:43.628 [2024-07-24 05:08:58.252211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.628 [2024-07-24 05:08:58.252222] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:43.628 [2024-07-24 05:08:58.252233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:43.628 [2024-07-24 05:08:58.252258] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.628 [2024-07-24 05:08:58.252270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:43.628 [2024-07-24 05:08:58.252281] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:43.628 [2024-07-24 05:08:58.252291] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.628 [2024-07-24 05:08:58.252302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:43.628 [2024-07-24 05:08:58.252313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:43.628 [2024-07-24 05:08:58.252323] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.628 [2024-07-24 05:08:58.252334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:43.628 [2024-07-24 05:08:58.252345] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:43.628 [2024-07-24 05:08:58.252356] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.628 [2024-07-24 05:08:58.252366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:43.628 [2024-07-24 05:08:58.252377] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:43.628 [2024-07-24 05:08:58.252387] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.628 [2024-07-24 05:08:58.252398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:43.628 [2024-07-24 05:08:58.252409] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:43.628 [2024-07-24 05:08:58.252419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.628 [2024-07-24 05:08:58.252430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:43.628 [2024-07-24 05:08:58.252441] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:43.628 [2024-07-24 05:08:58.252452] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.628 [2024-07-24 05:08:58.252463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:43.628 [2024-07-24 05:08:58.252474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:43.628 [2024-07-24 05:08:58.252484] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.628 [2024-07-24 05:08:58.252495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:43.628 [2024-07-24 05:08:58.252506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:43.628 [2024-07-24 05:08:58.252517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.628 [2024-07-24 05:08:58.252528] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:43.628 [2024-07-24 05:08:58.252540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:43.628 [2024-07-24 05:08:58.252551] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.628 [2024-07-24 05:08:58.252562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.628 [2024-07-24 05:08:58.252580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:43.628 
[2024-07-24 05:08:58.252592] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:43.628 [2024-07-24 05:08:58.252602] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:43.628 [2024-07-24 05:08:58.252613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:43.628 [2024-07-24 05:08:58.252624] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:43.628 [2024-07-24 05:08:58.252650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:43.628 [2024-07-24 05:08:58.252663] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:43.628 [2024-07-24 05:08:58.252677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.628 [2024-07-24 05:08:58.252705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:43.628 [2024-07-24 05:08:58.252717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:43.628 [2024-07-24 05:08:58.252745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:43.628 [2024-07-24 05:08:58.252757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:43.628 [2024-07-24 05:08:58.252768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:43.628 [2024-07-24 05:08:58.252780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:43.628 [2024-07-24 05:08:58.252791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:43.628 [2024-07-24 05:08:58.252803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:43.629 [2024-07-24 05:08:58.252814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:43.629 [2024-07-24 05:08:58.252825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:43.629 [2024-07-24 05:08:58.252837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:43.629 [2024-07-24 05:08:58.252848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:43.629 [2024-07-24 05:08:58.252859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:43.629 [2024-07-24 05:08:58.252888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:43.629 [2024-07-24 05:08:58.252900] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:43.629 [2024-07-24 05:08:58.252939] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.629 [2024-07-24 05:08:58.252958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:43.629 [2024-07-24 05:08:58.252971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:43.629 [2024-07-24 05:08:58.252983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:43.629 [2024-07-24 05:08:58.252995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:43.629 [2024-07-24 05:08:58.253009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.629 [2024-07-24 05:08:58.253022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:43.629 [2024-07-24 05:08:58.253036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:19:43.629 [2024-07-24 05:08:58.253048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.888 [2024-07-24 05:08:58.293815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.888 [2024-07-24 05:08:58.293896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:43.888 [2024-07-24 05:08:58.293939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.664 ms 00:19:43.888 [2024-07-24 05:08:58.293951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.888 [2024-07-24 05:08:58.294138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.888 [2024-07-24 05:08:58.294175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:43.888 [2024-07-24 05:08:58.294237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:43.888 [2024-07-24 05:08:58.294249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.888 [2024-07-24 05:08:58.327406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.888 [2024-07-24 05:08:58.327475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:43.888 [2024-07-24 05:08:58.327511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.122 ms 00:19:43.888 [2024-07-24 05:08:58.327523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.888 [2024-07-24 05:08:58.327705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.888 [2024-07-24 05:08:58.327724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:43.888 [2024-07-24 05:08:58.327738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:43.888 [2024-07-24 05:08:58.327749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.888 [2024-07-24 05:08:58.328174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.888 [2024-07-24 05:08:58.328203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:43.888 [2024-07-24 05:08:58.328218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:19:43.888 [2024-07-24 05:08:58.328230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.888 [2024-07-24 
05:08:58.328409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.888 [2024-07-24 05:08:58.328440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:43.888 [2024-07-24 05:08:58.328454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:19:43.888 [2024-07-24 05:08:58.328465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.888 [2024-07-24 05:08:58.343065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.888 [2024-07-24 05:08:58.343120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:43.888 [2024-07-24 05:08:58.343153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.569 ms 00:19:43.888 [2024-07-24 05:08:58.343165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.888 [2024-07-24 05:08:58.357455] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:43.888 [2024-07-24 05:08:58.357510] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:43.888 [2024-07-24 05:08:58.357543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.888 [2024-07-24 05:08:58.357556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:43.888 [2024-07-24 05:08:58.357568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.196 ms 00:19:43.888 [2024-07-24 05:08:58.357578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.888 [2024-07-24 05:08:58.385948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.888 [2024-07-24 05:08:58.386004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:43.888 [2024-07-24 05:08:58.386037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.280 ms 00:19:43.888 [2024-07-24 05:08:58.386049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.888 [2024-07-24 05:08:58.400673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.888 [2024-07-24 05:08:58.400753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:43.888 [2024-07-24 05:08:58.400786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.504 ms 00:19:43.888 [2024-07-24 05:08:58.400797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.888 [2024-07-24 05:08:58.415145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.888 [2024-07-24 05:08:58.415195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:43.888 [2024-07-24 05:08:58.415226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.226 ms 00:19:43.888 [2024-07-24 05:08:58.415236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.888 [2024-07-24 05:08:58.416183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.889 [2024-07-24 05:08:58.416232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:43.889 [2024-07-24 05:08:58.416263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:19:43.889 [2024-07-24 05:08:58.416274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.889 [2024-07-24 05:08:58.477235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:43.889 [2024-07-24 05:08:58.477319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:43.889 [2024-07-24 05:08:58.477354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.929 ms 00:19:43.889 [2024-07-24 05:08:58.477365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.889 [2024-07-24 05:08:58.488275] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:43.889 [2024-07-24 05:08:58.501241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.889 [2024-07-24 05:08:58.501334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:43.889 [2024-07-24 05:08:58.501370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.712 ms 00:19:43.889 [2024-07-24 05:08:58.501381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.889 [2024-07-24 05:08:58.501520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.889 [2024-07-24 05:08:58.501540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:43.889 [2024-07-24 05:08:58.501552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:43.889 [2024-07-24 05:08:58.501563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.889 [2024-07-24 05:08:58.501642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.889 [2024-07-24 05:08:58.501675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:43.889 [2024-07-24 05:08:58.501687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:43.889 [2024-07-24 05:08:58.501698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.889 [2024-07-24 05:08:58.501732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.889 [2024-07-24 05:08:58.501755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:43.889 [2024-07-24 05:08:58.501767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:43.889 [2024-07-24 05:08:58.501778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.889 [2024-07-24 05:08:58.501815] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:43.889 [2024-07-24 05:08:58.501831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.889 [2024-07-24 05:08:58.501843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:43.889 [2024-07-24 05:08:58.501872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:43.889 [2024-07-24 05:08:58.501899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.148 [2024-07-24 05:08:58.532783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.149 [2024-07-24 05:08:58.532870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:44.149 [2024-07-24 05:08:58.532907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.837 ms 00:19:44.149 [2024-07-24 05:08:58.532919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.149 [2024-07-24 05:08:58.533071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.149 [2024-07-24 05:08:58.533109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:44.149 [2024-07-24 05:08:58.533139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:44.149 [2024-07-24 05:08:58.533152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.149 [2024-07-24 05:08:58.534350] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:44.149 [2024-07-24 05:08:58.538635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.240 ms, result 0 00:19:44.149 [2024-07-24 05:08:58.539497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:44.149 [2024-07-24 05:08:58.556304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:55.627  Copying: 24/256 [MB] (24 MBps) Copying: 46/256 [MB] (21 MBps) Copying: 68/256 [MB] (22 MBps) Copying: 91/256 [MB] (22 MBps) Copying: 111/256 [MB] (20 MBps) Copying: 132/256 [MB] (20 MBps) Copying: 153/256 [MB] (21 MBps) Copying: 174/256 [MB] (20 MBps) Copying: 196/256 [MB] (21 MBps) Copying: 218/256 [MB] (22 MBps) Copying: 240/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 21 MBps)[2024-07-24 05:09:10.245930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:55.888 [2024-07-24 05:09:10.258192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.888 [2024-07-24 05:09:10.258235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:55.888 [2024-07-24 05:09:10.258256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:55.888 [2024-07-24 05:09:10.258268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.888 [2024-07-24 05:09:10.258306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:55.888 [2024-07-24 05:09:10.261795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.888 [2024-07-24 05:09:10.261824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:55.888 [2024-07-24 05:09:10.261850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.468 ms 00:19:55.888 [2024-07-24 05:09:10.261878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.888 [2024-07-24 05:09:10.262221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.888 [2024-07-24 05:09:10.262245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:55.888 [2024-07-24 05:09:10.262259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:55.888 [2024-07-24 05:09:10.262271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.888 [2024-07-24 05:09:10.265990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.888 [2024-07-24 05:09:10.266018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:55.888 [2024-07-24 05:09:10.266038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:19:55.888 [2024-07-24 05:09:10.266049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.888 [2024-07-24 05:09:10.273316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.888 [2024-07-24 05:09:10.273342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Finish L2P trims 00:19:55.888 [2024-07-24 05:09:10.273355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.243 ms 00:19:55.888 [2024-07-24 05:09:10.273366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.888 [2024-07-24 05:09:10.301652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.888 [2024-07-24 05:09:10.301687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:55.888 [2024-07-24 05:09:10.301703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.225 ms 00:19:55.888 [2024-07-24 05:09:10.301713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.888 [2024-07-24 05:09:10.317683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.888 [2024-07-24 05:09:10.317718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:55.888 [2024-07-24 05:09:10.317732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.893 ms 00:19:55.888 [2024-07-24 05:09:10.317765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.888 [2024-07-24 05:09:10.317943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.888 [2024-07-24 05:09:10.317964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:55.888 [2024-07-24 05:09:10.317977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:19:55.888 [2024-07-24 05:09:10.317988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.888 [2024-07-24 05:09:10.345293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.888 [2024-07-24 05:09:10.345327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:55.888 [2024-07-24 05:09:10.345341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.283 ms 00:19:55.888 [2024-07-24 05:09:10.345351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.888 [2024-07-24 05:09:10.372157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.888 [2024-07-24 05:09:10.372190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:55.888 [2024-07-24 05:09:10.372205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.692 ms 00:19:55.888 [2024-07-24 05:09:10.372215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.888 [2024-07-24 05:09:10.398360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.888 [2024-07-24 05:09:10.398406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:55.888 [2024-07-24 05:09:10.398421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.088 ms 00:19:55.888 [2024-07-24 05:09:10.398431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.888 [2024-07-24 05:09:10.427747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.888 [2024-07-24 05:09:10.427799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:55.888 [2024-07-24 05:09:10.427814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.215 ms 00:19:55.888 [2024-07-24 05:09:10.427825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.888 [2024-07-24 05:09:10.427912] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:55.888 
[2024-07-24 05:09:10.427942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:55.888 [2024-07-24 05:09:10.427956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:55.888 [2024-07-24 05:09:10.427968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:55.888 [2024-07-24 05:09:10.427979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:55.888 [2024-07-24 05:09:10.427991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:55.888 [2024-07-24 05:09:10.428003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:55.888 [2024-07-24 05:09:10.428014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:55.888 [2024-07-24 05:09:10.428025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:55.888 [2024-07-24 05:09:10.428037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:55.888 [2024-07-24 05:09:10.428048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:55.888 [2024-07-24 05:09:10.428060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 
00:19:55.889 [2024-07-24 05:09:10.428276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.428999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.429010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.429020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.429031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.429042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.429053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.429064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.429074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.429101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:55.889 [2024-07-24 05:09:10.429126] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:55.890 [2024-07-24 05:09:10.429157] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:55.890 [2024-07-24 05:09:10.429168] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7530738e-4cfd-417f-b87d-53757612b8c5 00:19:55.890 [2024-07-24 05:09:10.429178] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:55.890 [2024-07-24 05:09:10.429188] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:55.890 [2024-07-24 05:09:10.429210] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:55.890 [2024-07-24 05:09:10.429220] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:55.890 [2024-07-24 05:09:10.429229] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:55.890 [2024-07-24 05:09:10.429239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:55.890 [2024-07-24 05:09:10.429248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:55.890 [2024-07-24 05:09:10.429257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:55.890 [2024-07-24 05:09:10.429266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:55.890 [2024-07-24 05:09:10.429276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.890 [2024-07-24 05:09:10.429287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:55.890 [2024-07-24 05:09:10.429302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:19:55.890 [2024-07-24 05:09:10.429312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.890 [2024-07-24 05:09:10.444581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.890 [2024-07-24 05:09:10.444612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:55.890 [2024-07-24 05:09:10.444627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.245 ms 00:19:55.890 [2024-07-24 05:09:10.444638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.890 [2024-07-24 05:09:10.445122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.890 [2024-07-24 05:09:10.445156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:55.890 [2024-07-24 05:09:10.445171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:19:55.890 [2024-07-24 05:09:10.445197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.890 [2024-07-24 05:09:10.479147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.890 [2024-07-24 05:09:10.479191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:55.890 [2024-07-24 05:09:10.479205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.890 [2024-07-24 05:09:10.479216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.890 [2024-07-24 05:09:10.479387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.890 [2024-07-24 05:09:10.479409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:55.890 [2024-07-24 05:09:10.479421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.890 [2024-07-24 05:09:10.479433] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.890 [2024-07-24 05:09:10.479494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.890 [2024-07-24 05:09:10.479512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:55.890 [2024-07-24 05:09:10.479525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.890 [2024-07-24 05:09:10.479536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.890 [2024-07-24 05:09:10.479560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.890 [2024-07-24 05:09:10.479575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:55.890 [2024-07-24 05:09:10.479593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.890 [2024-07-24 05:09:10.479604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.149 [2024-07-24 05:09:10.563861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.149 [2024-07-24 05:09:10.563931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.149 [2024-07-24 05:09:10.563949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.149 [2024-07-24 05:09:10.563960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.149 [2024-07-24 05:09:10.634795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.149 [2024-07-24 05:09:10.634886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:56.149 [2024-07-24 05:09:10.634905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.149 [2024-07-24 05:09:10.634916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.149 [2024-07-24 05:09:10.635014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.149 [2024-07-24 05:09:10.635031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:56.149 [2024-07-24 05:09:10.635057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.149 [2024-07-24 05:09:10.635083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.149 [2024-07-24 05:09:10.635148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.149 [2024-07-24 05:09:10.635162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:56.149 [2024-07-24 05:09:10.635173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.149 [2024-07-24 05:09:10.635189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.149 [2024-07-24 05:09:10.635347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.149 [2024-07-24 05:09:10.635373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:56.149 [2024-07-24 05:09:10.635387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.149 [2024-07-24 05:09:10.635399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.149 [2024-07-24 05:09:10.635456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.149 [2024-07-24 05:09:10.635474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:56.149 [2024-07-24 05:09:10.635487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:19:56.149 [2024-07-24 05:09:10.635499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.149 [2024-07-24 05:09:10.635552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.149 [2024-07-24 05:09:10.635582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:56.149 [2024-07-24 05:09:10.635609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.149 [2024-07-24 05:09:10.635620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.149 [2024-07-24 05:09:10.635689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.149 [2024-07-24 05:09:10.635706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:56.149 [2024-07-24 05:09:10.635718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.149 [2024-07-24 05:09:10.635734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.149 [2024-07-24 05:09:10.635920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.720 ms, result 0 00:19:57.085 00:19:57.085 00:19:57.085 05:09:11 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:57.085 05:09:11 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:57.652 05:09:12 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:57.652 [2024-07-24 05:09:12.148794] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
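The three ftl/trim.sh steps just above are the post-trim verification: "cmp --bytes=4194304 ... /dev/zero" checks that the first 4194304 bytes (4 MiB) of the dumped FTL data read back as zeroes, "md5sum" fingerprints the same file, and spdk_dd then refills ftl0 from the random_pattern file. A minimal sketch of what that cmp invocation asserts, assuming only what the flags themselves say; the helper name is illustrative and not part of the test suite:

# Sketch of the check on trim.sh line 86 above: cmp --bytes=4194304
# against /dev/zero succeeds only if the first 4 MiB of the dumped file
# are all zero bytes (cmp also fails on early EOF, hence the length check).
CHECK_BYTES = 4 * 1024 * 1024  # 4194304, the value passed to --bytes

def reads_back_as_zero(path: str, nbytes: int = CHECK_BYTES) -> bool:
    with open(path, "rb") as f:
        chunk = f.read(nbytes)
    return len(chunk) == nbytes and chunk == bytes(nbytes)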
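The MiB figures in the ftl_layout dumps above follow directly from the block counts in the superblock region tables: blk_offs and blk_sz are given in FTL blocks, and the numbers only line up for a 4 KiB block (for example the l2p region, type 0x2, is 0x5a00 = 23040 blocks = 90.00 MiB, which also equals the reported 23592960 L2P entries at 4 bytes each, and the 0x800-block p2l regions match the reported 2048 P2L checkpoint pages). A short check of that arithmetic; the 4 KiB block size is inferred from the dump, not stated in it:

# Cross-check the "SB metadata layout - nvc" region table above,
# assuming a 4 KiB FTL block (inferred from the dump, not stated in it).
FTL_BLOCK = 4096  # bytes, assumed

regions = {  # region -> blk_sz from the dump
    "sb (0x0)": 0x20,       # reported as 0.12 MiB
    "l2p (0x2)": 0x5a00,    # reported as 90.00 MiB
    "band_md (0x3)": 0x80,  # reported as 0.50 MiB
    "p2l0 (0xa)": 0x800,    # reported as 8.00 MiB (2048 checkpoint pages)
    "trim_md (0xe)": 0x40,  # reported as 0.25 MiB
}
for name, blk_sz in regions.items():
    print(f"{name}: {blk_sz:#x} blocks = {blk_sz * FTL_BLOCK / 2**20:.2f} MiB")

# The l2p region size matches the reported entry count and address size:
# 23592960 entries * 4 bytes = 90.00 MiB.
assert 23592960 * 4 == 0x5a00 * FTL_BLOCK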
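Each management step above is logged by mngt/ftl_mngt.c as an Action (or Rollback) quad of name, duration and status records, and the finish_msg lines give the process totals ('FTL startup', 312.240 ms, earlier; 'FTL shutdown', 377.720 ms, just above). A rough way to see where that wall time goes is to fold the name/duration pairs back together. This is only a reading aid for the raw job log, not part of the test; it assumes the record format seen above and undoes the arbitrary line wrapping first:

import re
import sys
from collections import defaultdict

# Pipe the raw job log in on stdin; prints per-step duration totals.
text = re.sub(r"\s+", " ", sys.stdin.read())  # undo line wrapping
pairs = re.findall(
    r"name: (.+?) \d{2}:\d{2}:\d{2}\.\d{3} "          # step name, next stamp
    r"\[[^\]]+\] mngt/ftl_mngt\.c: 430:trace_step: "  # the paired duration
    r"\*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms",
    text)

totals = defaultdict(float)
for name, ms in pairs:
    totals[name] += float(ms)
for name, ms in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{ms:8.3f} ms  {name}")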
00:19:57.652 [2024-07-24 05:09:12.148976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80539 ] 00:19:57.910 [2024-07-24 05:09:12.306398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.910 [2024-07-24 05:09:12.476005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.168 [2024-07-24 05:09:12.749275] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.168 [2024-07-24 05:09:12.749347] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.428 [2024-07-24 05:09:12.907736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.429 [2024-07-24 05:09:12.907820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:58.429 [2024-07-24 05:09:12.907850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:58.429 [2024-07-24 05:09:12.907864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.429 [2024-07-24 05:09:12.910849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.429 [2024-07-24 05:09:12.910917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.429 [2024-07-24 05:09:12.910933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.957 ms 00:19:58.429 [2024-07-24 05:09:12.910944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.429 [2024-07-24 05:09:12.911286] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:58.429 [2024-07-24 05:09:12.912328] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:58.429 [2024-07-24 05:09:12.912390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.429 [2024-07-24 05:09:12.912404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.429 [2024-07-24 05:09:12.912415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:19:58.429 [2024-07-24 05:09:12.912425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.429 [2024-07-24 05:09:12.913722] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:58.429 [2024-07-24 05:09:12.928422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.429 [2024-07-24 05:09:12.928479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:58.429 [2024-07-24 05:09:12.928500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.701 ms 00:19:58.429 [2024-07-24 05:09:12.928511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.429 [2024-07-24 05:09:12.928635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.429 [2024-07-24 05:09:12.928692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:58.429 [2024-07-24 05:09:12.928707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:58.429 [2024-07-24 05:09:12.928717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.429 [2024-07-24 05:09:12.933101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:58.429 [2024-07-24 05:09:12.933155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.429 [2024-07-24 05:09:12.933184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.312 ms 00:19:58.429 [2024-07-24 05:09:12.933193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.429 [2024-07-24 05:09:12.933305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.429 [2024-07-24 05:09:12.933325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.429 [2024-07-24 05:09:12.933367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:58.429 [2024-07-24 05:09:12.933392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.429 [2024-07-24 05:09:12.933433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.429 [2024-07-24 05:09:12.933448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:58.429 [2024-07-24 05:09:12.933463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:58.429 [2024-07-24 05:09:12.933473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.429 [2024-07-24 05:09:12.933502] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:58.429 [2024-07-24 05:09:12.937279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.429 [2024-07-24 05:09:12.937329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.429 [2024-07-24 05:09:12.937342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.785 ms 00:19:58.429 [2024-07-24 05:09:12.937352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.429 [2024-07-24 05:09:12.937417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.429 [2024-07-24 05:09:12.937434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:58.429 [2024-07-24 05:09:12.937445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:58.429 [2024-07-24 05:09:12.937454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.429 [2024-07-24 05:09:12.937492] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:58.429 [2024-07-24 05:09:12.937549] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:58.429 [2024-07-24 05:09:12.937592] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:58.429 [2024-07-24 05:09:12.937611] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:58.429 [2024-07-24 05:09:12.937706] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:58.429 [2024-07-24 05:09:12.937731] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:58.429 [2024-07-24 05:09:12.937747] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:58.429 [2024-07-24 05:09:12.937761] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:58.429 [2024-07-24 05:09:12.937773] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:58.429 [2024-07-24 05:09:12.937790] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:58.429 [2024-07-24 05:09:12.937800] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:58.429 [2024-07-24 05:09:12.937810] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:58.429 [2024-07-24 05:09:12.937820] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:58.429 [2024-07-24 05:09:12.937830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.429 [2024-07-24 05:09:12.937854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:58.429 [2024-07-24 05:09:12.937868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:19:58.429 [2024-07-24 05:09:12.937878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.429 [2024-07-24 05:09:12.937969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.429 [2024-07-24 05:09:12.937991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:58.429 [2024-07-24 05:09:12.938009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:58.429 [2024-07-24 05:09:12.938019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.429 [2024-07-24 05:09:12.938120] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:58.429 [2024-07-24 05:09:12.938136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:58.429 [2024-07-24 05:09:12.938147] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.429 [2024-07-24 05:09:12.938158] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.429 [2024-07-24 05:09:12.938169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:58.429 [2024-07-24 05:09:12.938178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:58.429 [2024-07-24 05:09:12.938188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:58.429 [2024-07-24 05:09:12.938197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:58.429 [2024-07-24 05:09:12.938207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:58.429 [2024-07-24 05:09:12.938216] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.429 [2024-07-24 05:09:12.938225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:58.429 [2024-07-24 05:09:12.938235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:58.429 [2024-07-24 05:09:12.938245] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.429 [2024-07-24 05:09:12.938254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:58.429 [2024-07-24 05:09:12.938264] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:58.429 [2024-07-24 05:09:12.938273] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.429 [2024-07-24 05:09:12.938283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:58.429 [2024-07-24 05:09:12.938292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:58.429 [2024-07-24 05:09:12.938315] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.429 [2024-07-24 05:09:12.938325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:58.429 [2024-07-24 05:09:12.938335] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:58.429 [2024-07-24 05:09:12.938345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.429 [2024-07-24 05:09:12.938357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:58.429 [2024-07-24 05:09:12.938366] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:58.429 [2024-07-24 05:09:12.938376] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.429 [2024-07-24 05:09:12.938386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:58.429 [2024-07-24 05:09:12.938395] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:58.429 [2024-07-24 05:09:12.938404] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.429 [2024-07-24 05:09:12.938414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:58.429 [2024-07-24 05:09:12.938424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:58.429 [2024-07-24 05:09:12.938433] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.429 [2024-07-24 05:09:12.938442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:58.429 [2024-07-24 05:09:12.938452] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:58.429 [2024-07-24 05:09:12.938461] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.429 [2024-07-24 05:09:12.938470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:58.429 [2024-07-24 05:09:12.938479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:58.429 [2024-07-24 05:09:12.938488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.429 [2024-07-24 05:09:12.938498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:58.429 [2024-07-24 05:09:12.938507] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:58.429 [2024-07-24 05:09:12.938516] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.430 [2024-07-24 05:09:12.938526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:58.430 [2024-07-24 05:09:12.938535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:58.430 [2024-07-24 05:09:12.938544] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.430 [2024-07-24 05:09:12.938553] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:58.430 [2024-07-24 05:09:12.938564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:58.430 [2024-07-24 05:09:12.938574] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.430 [2024-07-24 05:09:12.938584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.430 [2024-07-24 05:09:12.938598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:58.430 [2024-07-24 05:09:12.938608] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:58.430 [2024-07-24 05:09:12.938618] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:58.430 
[2024-07-24 05:09:12.938628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:58.430 [2024-07-24 05:09:12.938637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:58.430 [2024-07-24 05:09:12.938646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:58.430 [2024-07-24 05:09:12.938658] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:58.430 [2024-07-24 05:09:12.938671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.430 [2024-07-24 05:09:12.938683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:58.430 [2024-07-24 05:09:12.938694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:58.430 [2024-07-24 05:09:12.938704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:58.430 [2024-07-24 05:09:12.938714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:58.430 [2024-07-24 05:09:12.938724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:58.430 [2024-07-24 05:09:12.938735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:58.430 [2024-07-24 05:09:12.938745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:58.430 [2024-07-24 05:09:12.938755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:58.430 [2024-07-24 05:09:12.938765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:58.430 [2024-07-24 05:09:12.938776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:58.430 [2024-07-24 05:09:12.938786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:58.430 [2024-07-24 05:09:12.938796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:58.430 [2024-07-24 05:09:12.938806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:58.430 [2024-07-24 05:09:12.938816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:58.430 [2024-07-24 05:09:12.938827] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:58.430 [2024-07-24 05:09:12.938852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.430 [2024-07-24 05:09:12.938883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:58.430 [2024-07-24 05:09:12.938894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:58.430 [2024-07-24 05:09:12.938905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:58.430 [2024-07-24 05:09:12.938916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:58.430 [2024-07-24 05:09:12.938928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.430 [2024-07-24 05:09:12.938939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:58.430 [2024-07-24 05:09:12.938950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:19:58.430 [2024-07-24 05:09:12.938961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.430 [2024-07-24 05:09:12.975832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.430 [2024-07-24 05:09:12.975917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:58.430 [2024-07-24 05:09:12.975942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.777 ms 00:19:58.430 [2024-07-24 05:09:12.975952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.430 [2024-07-24 05:09:12.976188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.430 [2024-07-24 05:09:12.976219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:58.430 [2024-07-24 05:09:12.976239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:58.430 [2024-07-24 05:09:12.976250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.430 [2024-07-24 05:09:13.009244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.430 [2024-07-24 05:09:13.009312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:58.430 [2024-07-24 05:09:13.009329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.961 ms 00:19:58.430 [2024-07-24 05:09:13.009340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.430 [2024-07-24 05:09:13.009503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.430 [2024-07-24 05:09:13.009553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:58.430 [2024-07-24 05:09:13.009566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:58.430 [2024-07-24 05:09:13.009577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.430 [2024-07-24 05:09:13.009924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.430 [2024-07-24 05:09:13.009958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:58.430 [2024-07-24 05:09:13.009972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:19:58.430 [2024-07-24 05:09:13.009983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.430 [2024-07-24 05:09:13.010169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.430 [2024-07-24 05:09:13.010189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:58.430 [2024-07-24 05:09:13.010201] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:19:58.430 [2024-07-24 05:09:13.010212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.430 [2024-07-24 05:09:13.024956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.430 [2024-07-24 05:09:13.024996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:58.430 [2024-07-24 05:09:13.025012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.715 ms 00:19:58.430 [2024-07-24 05:09:13.025024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.430 [2024-07-24 05:09:13.039036] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:58.430 [2024-07-24 05:09:13.039093] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:58.430 [2024-07-24 05:09:13.039109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.430 [2024-07-24 05:09:13.039120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:58.430 [2024-07-24 05:09:13.039131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.884 ms 00:19:58.430 [2024-07-24 05:09:13.039141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.689 [2024-07-24 05:09:13.066694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.689 [2024-07-24 05:09:13.066749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:58.689 [2024-07-24 05:09:13.066765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.467 ms 00:19:58.689 [2024-07-24 05:09:13.066776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.689 [2024-07-24 05:09:13.080339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.689 [2024-07-24 05:09:13.080393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:58.689 [2024-07-24 05:09:13.080407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.461 ms 00:19:58.689 [2024-07-24 05:09:13.080416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.689 [2024-07-24 05:09:13.093339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.689 [2024-07-24 05:09:13.093401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:58.689 [2024-07-24 05:09:13.093415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.841 ms 00:19:58.689 [2024-07-24 05:09:13.093425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.689 [2024-07-24 05:09:13.094260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.689 [2024-07-24 05:09:13.094308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:58.689 [2024-07-24 05:09:13.094322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:19:58.689 [2024-07-24 05:09:13.094332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.689 [2024-07-24 05:09:13.160279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.689 [2024-07-24 05:09:13.160333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:58.689 [2024-07-24 05:09:13.160350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.917 ms 00:19:58.689 [2024-07-24 05:09:13.160361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.689 [2024-07-24 05:09:13.172589] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:58.689 [2024-07-24 05:09:13.186095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.689 [2024-07-24 05:09:13.186161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:58.689 [2024-07-24 05:09:13.186181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.548 ms 00:19:58.689 [2024-07-24 05:09:13.186192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.689 [2024-07-24 05:09:13.186346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.689 [2024-07-24 05:09:13.186365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:58.689 [2024-07-24 05:09:13.186392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:58.689 [2024-07-24 05:09:13.186416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.689 [2024-07-24 05:09:13.186499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.689 [2024-07-24 05:09:13.186514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:58.689 [2024-07-24 05:09:13.186525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:58.689 [2024-07-24 05:09:13.186535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.689 [2024-07-24 05:09:13.186566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.689 [2024-07-24 05:09:13.186585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:58.689 [2024-07-24 05:09:13.186595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:58.689 [2024-07-24 05:09:13.186605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.689 [2024-07-24 05:09:13.186640] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:58.689 [2024-07-24 05:09:13.186665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.689 [2024-07-24 05:09:13.186677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:58.689 [2024-07-24 05:09:13.186688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:58.689 [2024-07-24 05:09:13.186698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.689 [2024-07-24 05:09:13.213730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.689 [2024-07-24 05:09:13.213775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:58.689 [2024-07-24 05:09:13.213790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.002 ms 00:19:58.689 [2024-07-24 05:09:13.213801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.689 [2024-07-24 05:09:13.214011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.689 [2024-07-24 05:09:13.214041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:58.689 [2024-07-24 05:09:13.214055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:58.689 [2024-07-24 05:09:13.214066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
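The startup trace above fixes this FTL instance's geometry: 23592960 L2P entries of 4 bytes each, 2048 P2L checkpoint pages, and a 5-chunk NV cache (1 full + 3 empty after restore; the unaccounted chunk is presumably the open one). A minimal sanity check of those figures, assuming SPDK FTL's usual 4 KiB block size — the trace prints only the results, not the arithmetic:

    FTL_BLOCK_SIZE = 4096  # bytes; assumed standard 4 KiB FTL block
    MiB = 1024 * 1024

    # Figures copied verbatim from the ftl_layout_setup notices above.
    l2p_entries = 23_592_960   # "L2P entries: 23592960"
    l2p_addr_size = 4          # "L2P address size: 4" (bytes per entry)
    p2l_pages = 2048           # "P2L checkpoint pages: 2048"

    # L2P table size: entries * entry size = 90 MiB, matching
    # "Region l2p ... blocks: 90.00 MiB" in the NV cache layout dump.
    assert l2p_entries * l2p_addr_size / MiB == 90.0

    # Each P2L checkpoint region: pages * block size = 8 MiB, matching
    # "Region p2l0".."p2l3" with "blocks: 8.00 MiB".
    assert p2l_pages * FTL_BLOCK_SIZE / MiB == 8.0

    # Logical capacity the L2P can map: 92160 MiB (~90 GiB); the rest of
    # the 103424.00 MiB base device is presumably left to the FTL for
    # metadata and over-provisioning.
    print(l2p_entries * FTL_BLOCK_SIZE // MiB)  # -> 92160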
00:19:58.689 [2024-07-24 05:09:13.215228] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:58.689 [2024-07-24 05:09:13.218845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.043 ms, result 0 00:19:58.689 [2024-07-24 05:09:13.219747] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:58.689 [2024-07-24 05:09:13.234555] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:58.955  Copying: 4096/4096 [kB] (average 22 MBps)[2024-07-24 05:09:13.419998] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:58.955 [2024-07-24 05:09:13.430988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.955 [2024-07-24 05:09:13.431030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:58.955 [2024-07-24 05:09:13.431047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:58.955 [2024-07-24 05:09:13.431058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.955 [2024-07-24 05:09:13.431092] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:58.955 [2024-07-24 05:09:13.434144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.955 [2024-07-24 05:09:13.434175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:58.955 [2024-07-24 05:09:13.434188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.033 ms 00:19:58.955 [2024-07-24 05:09:13.434198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.955 [2024-07-24 05:09:13.435999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.955 [2024-07-24 05:09:13.436037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:58.955 [2024-07-24 05:09:13.436051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.774 ms 00:19:58.955 [2024-07-24 05:09:13.436077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.955 [2024-07-24 05:09:13.439647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.955 [2024-07-24 05:09:13.439684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:58.955 [2024-07-24 05:09:13.439705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.516 ms 00:19:58.955 [2024-07-24 05:09:13.439716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.955 [2024-07-24 05:09:13.446305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.955 [2024-07-24 05:09:13.446336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:58.955 [2024-07-24 05:09:13.446349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.539 ms 00:19:58.955 [2024-07-24 05:09:13.446359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.955 [2024-07-24 05:09:13.473533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.955 [2024-07-24 05:09:13.473571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:58.955 [2024-07-24 05:09:13.473586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
27.111 ms 00:19:58.955 [2024-07-24 05:09:13.473596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.955 [2024-07-24 05:09:13.489577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.955 [2024-07-24 05:09:13.489617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:58.955 [2024-07-24 05:09:13.489632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.922 ms 00:19:58.955 [2024-07-24 05:09:13.489648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.955 [2024-07-24 05:09:13.489826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.955 [2024-07-24 05:09:13.489864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:58.955 [2024-07-24 05:09:13.489878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:19:58.955 [2024-07-24 05:09:13.489889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.955 [2024-07-24 05:09:13.517885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.955 [2024-07-24 05:09:13.517923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:58.955 [2024-07-24 05:09:13.517938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.974 ms 00:19:58.955 [2024-07-24 05:09:13.517948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.955 [2024-07-24 05:09:13.546387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.955 [2024-07-24 05:09:13.546426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:58.955 [2024-07-24 05:09:13.546442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.339 ms 00:19:58.955 [2024-07-24 05:09:13.546452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.955 [2024-07-24 05:09:13.578383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.955 [2024-07-24 05:09:13.578426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:58.955 [2024-07-24 05:09:13.578441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.873 ms 00:19:58.955 [2024-07-24 05:09:13.578452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.213 [2024-07-24 05:09:13.607794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.213 [2024-07-24 05:09:13.607836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:59.213 [2024-07-24 05:09:13.607861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.256 ms 00:19:59.213 [2024-07-24 05:09:13.607872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.213 [2024-07-24 05:09:13.607932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:59.213 [2024-07-24 05:09:13.607969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:59.213 [2024-07-24 05:09:13.607998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:59.213 [2024-07-24 05:09:13.608009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:59.213 [2024-07-24 05:09:13.608020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 
05:09:13.608031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:59.214 [2024-07-24 05:09:13.608307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.608994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:59.214 [2024-07-24 05:09:13.609005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:59.215 [2024-07-24 05:09:13.609015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:59.215 [2024-07-24 05:09:13.609026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:59.215 [2024-07-24 05:09:13.609037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:59.215 [2024-07-24 05:09:13.609047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:59.215 [2024-07-24 05:09:13.609058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:59.215 [2024-07-24 05:09:13.609069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:59.215 [2024-07-24 05:09:13.609080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:59.215 [2024-07-24 05:09:13.609100] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:59.215 [2024-07-24 05:09:13.609125] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7530738e-4cfd-417f-b87d-53757612b8c5 00:19:59.215 [2024-07-24 05:09:13.609137] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:59.215 [2024-07-24 05:09:13.609147] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:59.215 
[2024-07-24 05:09:13.609171] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:59.215 [2024-07-24 05:09:13.609182] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:59.215 [2024-07-24 05:09:13.609191] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:59.215 [2024-07-24 05:09:13.609201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:59.215 [2024-07-24 05:09:13.609211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:59.215 [2024-07-24 05:09:13.609220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:59.215 [2024-07-24 05:09:13.609228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:59.215 [2024-07-24 05:09:13.609238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.215 [2024-07-24 05:09:13.609249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:59.215 [2024-07-24 05:09:13.609264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.309 ms 00:19:59.215 [2024-07-24 05:09:13.609275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.215 [2024-07-24 05:09:13.624214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.215 [2024-07-24 05:09:13.624249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:59.215 [2024-07-24 05:09:13.624264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.915 ms 00:19:59.215 [2024-07-24 05:09:13.624273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.215 [2024-07-24 05:09:13.624729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.215 [2024-07-24 05:09:13.624759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:59.215 [2024-07-24 05:09:13.624773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:19:59.215 [2024-07-24 05:09:13.624783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.215 [2024-07-24 05:09:13.660067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.215 [2024-07-24 05:09:13.660133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:59.215 [2024-07-24 05:09:13.660148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.215 [2024-07-24 05:09:13.660159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.215 [2024-07-24 05:09:13.660278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.215 [2024-07-24 05:09:13.660310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:59.215 [2024-07-24 05:09:13.660321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.215 [2024-07-24 05:09:13.660331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.215 [2024-07-24 05:09:13.660394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.215 [2024-07-24 05:09:13.660412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:59.215 [2024-07-24 05:09:13.660423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.215 [2024-07-24 05:09:13.660433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.215 [2024-07-24 05:09:13.660455] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:59.215 [2024-07-24 05:09:13.660480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:59.215 [2024-07-24 05:09:13.660491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.215 [2024-07-24 05:09:13.660501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.215 [2024-07-24 05:09:13.745930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.215 [2024-07-24 05:09:13.745995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:59.215 [2024-07-24 05:09:13.746013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.215 [2024-07-24 05:09:13.746024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.215 [2024-07-24 05:09:13.822033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.215 [2024-07-24 05:09:13.822120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:59.215 [2024-07-24 05:09:13.822137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.215 [2024-07-24 05:09:13.822148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.215 [2024-07-24 05:09:13.822229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.215 [2024-07-24 05:09:13.822245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:59.215 [2024-07-24 05:09:13.822256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.215 [2024-07-24 05:09:13.822265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.215 [2024-07-24 05:09:13.822296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.215 [2024-07-24 05:09:13.822308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:59.215 [2024-07-24 05:09:13.822318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.215 [2024-07-24 05:09:13.822333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.215 [2024-07-24 05:09:13.822471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.215 [2024-07-24 05:09:13.822489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:59.215 [2024-07-24 05:09:13.822501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.215 [2024-07-24 05:09:13.822511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.215 [2024-07-24 05:09:13.822560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.215 [2024-07-24 05:09:13.822576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:59.215 [2024-07-24 05:09:13.822588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.215 [2024-07-24 05:09:13.822604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.215 [2024-07-24 05:09:13.822648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.215 [2024-07-24 05:09:13.822668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:59.215 [2024-07-24 05:09:13.822680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.215 [2024-07-24 05:09:13.822690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:59.215 [2024-07-24 05:09:13.822741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.215 [2024-07-24 05:09:13.822755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:59.215 [2024-07-24 05:09:13.822766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.215 [2024-07-24 05:09:13.822782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.215 [2024-07-24 05:09:13.822959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 391.962 ms, result 0 00:20:00.151 00:20:00.151 00:20:00.151 05:09:14 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=80574 00:20:00.151 05:09:14 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:00.151 05:09:14 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 80574 00:20:00.151 05:09:14 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 80574 ']' 00:20:00.151 05:09:14 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.151 05:09:14 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.151 05:09:14 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.151 05:09:14 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.151 05:09:14 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:00.410 [2024-07-24 05:09:14.879846] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:20:00.410 [2024-07-24 05:09:14.880014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80574 ] 00:20:00.410 [2024-07-24 05:09:15.040096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.669 [2024-07-24 05:09:15.215155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.236 05:09:15 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.236 05:09:15 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:20:01.236 05:09:15 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:01.495 [2024-07-24 05:09:16.084832] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:01.495 [2024-07-24 05:09:16.084934] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:01.755 [2024-07-24 05:09:16.260766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.755 [2024-07-24 05:09:16.260833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:01.755 [2024-07-24 05:09:16.260863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:01.755 [2024-07-24 05:09:16.260878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.755 [2024-07-24 05:09:16.263734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.755 [2024-07-24 05:09:16.263807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:01.755 [2024-07-24 05:09:16.263823] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.828 ms 00:20:01.755 [2024-07-24 05:09:16.263837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.755 [2024-07-24 05:09:16.263979] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:01.755 [2024-07-24 05:09:16.264919] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:01.755 [2024-07-24 05:09:16.264989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.755 [2024-07-24 05:09:16.265005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:01.755 [2024-07-24 05:09:16.265018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:20:01.755 [2024-07-24 05:09:16.265034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.755 [2024-07-24 05:09:16.266388] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:01.755 [2024-07-24 05:09:16.280909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.755 [2024-07-24 05:09:16.280979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:01.756 [2024-07-24 05:09:16.280999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.517 ms 00:20:01.756 [2024-07-24 05:09:16.281012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.756 [2024-07-24 05:09:16.281124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.756 [2024-07-24 05:09:16.281146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:01.756 [2024-07-24 05:09:16.281161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:01.756 [2024-07-24 05:09:16.281171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.756 [2024-07-24 05:09:16.285524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.756 [2024-07-24 05:09:16.285582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:01.756 [2024-07-24 05:09:16.285604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.242 ms 00:20:01.756 [2024-07-24 05:09:16.285617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.756 [2024-07-24 05:09:16.285762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.756 [2024-07-24 05:09:16.285783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:01.756 [2024-07-24 05:09:16.285798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:01.756 [2024-07-24 05:09:16.285828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.756 [2024-07-24 05:09:16.285915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.756 [2024-07-24 05:09:16.285933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:01.756 [2024-07-24 05:09:16.285948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:01.756 [2024-07-24 05:09:16.285960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.756 [2024-07-24 05:09:16.285997] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:01.756 [2024-07-24 05:09:16.290025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:01.756 [2024-07-24 05:09:16.290065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:01.756 [2024-07-24 05:09:16.290080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.041 ms 00:20:01.756 [2024-07-24 05:09:16.290094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.756 [2024-07-24 05:09:16.290154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.756 [2024-07-24 05:09:16.290178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:01.756 [2024-07-24 05:09:16.290193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:01.756 [2024-07-24 05:09:16.290205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.756 [2024-07-24 05:09:16.290232] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:01.756 [2024-07-24 05:09:16.290293] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:01.756 [2024-07-24 05:09:16.290342] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:01.756 [2024-07-24 05:09:16.290369] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:01.756 [2024-07-24 05:09:16.290470] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:01.756 [2024-07-24 05:09:16.290494] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:01.756 [2024-07-24 05:09:16.290509] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:01.756 [2024-07-24 05:09:16.290526] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:01.756 [2024-07-24 05:09:16.290540] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:01.756 [2024-07-24 05:09:16.290556] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:01.756 [2024-07-24 05:09:16.290567] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:01.756 [2024-07-24 05:09:16.290579] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:01.756 [2024-07-24 05:09:16.290590] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:01.756 [2024-07-24 05:09:16.290606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.756 [2024-07-24 05:09:16.290617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:01.756 [2024-07-24 05:09:16.290631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:20:01.756 [2024-07-24 05:09:16.290644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.756 [2024-07-24 05:09:16.290739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.756 [2024-07-24 05:09:16.290753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:01.756 [2024-07-24 05:09:16.290768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:01.756 [2024-07-24 05:09:16.290779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.756 [2024-07-24 05:09:16.290915] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:01.756 [2024-07-24 05:09:16.290941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:01.756 [2024-07-24 05:09:16.290957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.756 [2024-07-24 05:09:16.290969] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.756 [2024-07-24 05:09:16.290987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:01.756 [2024-07-24 05:09:16.290998] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:01.756 [2024-07-24 05:09:16.291011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:01.756 [2024-07-24 05:09:16.291022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:01.756 [2024-07-24 05:09:16.291036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:01.756 [2024-07-24 05:09:16.291047] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.756 [2024-07-24 05:09:16.291059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:01.756 [2024-07-24 05:09:16.291070] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:01.756 [2024-07-24 05:09:16.291081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.756 [2024-07-24 05:09:16.291091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:01.756 [2024-07-24 05:09:16.291104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:01.756 [2024-07-24 05:09:16.291114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.756 [2024-07-24 05:09:16.291126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:01.756 [2024-07-24 05:09:16.291136] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:01.756 [2024-07-24 05:09:16.291149] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.756 [2024-07-24 05:09:16.291159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:01.756 [2024-07-24 05:09:16.291171] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:01.756 [2024-07-24 05:09:16.291181] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.756 [2024-07-24 05:09:16.291193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:01.756 [2024-07-24 05:09:16.291204] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:01.756 [2024-07-24 05:09:16.291218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.756 [2024-07-24 05:09:16.291228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:01.756 [2024-07-24 05:09:16.291240] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:01.756 [2024-07-24 05:09:16.291288] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.756 [2024-07-24 05:09:16.291321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:01.756 [2024-07-24 05:09:16.291333] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:01.756 [2024-07-24 05:09:16.291347] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.756 [2024-07-24 05:09:16.291358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:01.756 [2024-07-24 
05:09:16.291372] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:01.756 [2024-07-24 05:09:16.291382] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.756 [2024-07-24 05:09:16.291395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:01.756 [2024-07-24 05:09:16.291407] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:01.756 [2024-07-24 05:09:16.291419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.756 [2024-07-24 05:09:16.291430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:01.756 [2024-07-24 05:09:16.291443] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:01.756 [2024-07-24 05:09:16.291454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.756 [2024-07-24 05:09:16.291469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:01.756 [2024-07-24 05:09:16.291480] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:01.756 [2024-07-24 05:09:16.291493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.756 [2024-07-24 05:09:16.291504] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:01.756 [2024-07-24 05:09:16.291518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:01.756 [2024-07-24 05:09:16.291530] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.756 [2024-07-24 05:09:16.291543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.756 [2024-07-24 05:09:16.291555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:01.756 [2024-07-24 05:09:16.291568] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:01.756 [2024-07-24 05:09:16.291579] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:01.756 [2024-07-24 05:09:16.291593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:01.756 [2024-07-24 05:09:16.291618] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:01.756 [2024-07-24 05:09:16.291631] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:01.756 [2024-07-24 05:09:16.291644] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:01.756 [2024-07-24 05:09:16.291660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.757 [2024-07-24 05:09:16.291687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:01.757 [2024-07-24 05:09:16.291704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:01.757 [2024-07-24 05:09:16.291716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:01.757 [2024-07-24 05:09:16.291729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:01.757 [2024-07-24 05:09:16.291741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:01.757 
[2024-07-24 05:09:16.291754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:01.757 [2024-07-24 05:09:16.291765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:01.757 [2024-07-24 05:09:16.291778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:01.757 [2024-07-24 05:09:16.291790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:01.757 [2024-07-24 05:09:16.291802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:01.757 [2024-07-24 05:09:16.291814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:01.757 [2024-07-24 05:09:16.291827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:01.757 [2024-07-24 05:09:16.291838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:01.757 [2024-07-24 05:09:16.291852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:01.757 [2024-07-24 05:09:16.291863] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:01.757 [2024-07-24 05:09:16.291877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.757 [2024-07-24 05:09:16.291890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:01.757 [2024-07-24 05:09:16.291935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:01.757 [2024-07-24 05:09:16.291949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:01.757 [2024-07-24 05:09:16.291963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:01.757 [2024-07-24 05:09:16.291976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.757 [2024-07-24 05:09:16.291989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:01.757 [2024-07-24 05:09:16.292001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:20:01.757 [2024-07-24 05:09:16.292018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.757 [2024-07-24 05:09:16.322560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.757 [2024-07-24 05:09:16.322634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:01.757 [2024-07-24 05:09:16.322659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.438 ms 00:20:01.757 [2024-07-24 05:09:16.322672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.757 [2024-07-24 05:09:16.322853] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.757 [2024-07-24 05:09:16.322876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:01.757 [2024-07-24 05:09:16.322890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:01.757 [2024-07-24 05:09:16.322918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.757 [2024-07-24 05:09:16.355501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.757 [2024-07-24 05:09:16.355575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:01.757 [2024-07-24 05:09:16.355594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.523 ms 00:20:01.757 [2024-07-24 05:09:16.355622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.757 [2024-07-24 05:09:16.355769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.757 [2024-07-24 05:09:16.355791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:01.757 [2024-07-24 05:09:16.355805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:01.757 [2024-07-24 05:09:16.355817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.757 [2024-07-24 05:09:16.356196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.757 [2024-07-24 05:09:16.356238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:01.757 [2024-07-24 05:09:16.356254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:20:01.757 [2024-07-24 05:09:16.356267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.757 [2024-07-24 05:09:16.356410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.757 [2024-07-24 05:09:16.356441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:01.757 [2024-07-24 05:09:16.356456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:20:01.757 [2024-07-24 05:09:16.356469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.757 [2024-07-24 05:09:16.371851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.757 [2024-07-24 05:09:16.371932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:01.757 [2024-07-24 05:09:16.371950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.353 ms 00:20:01.757 [2024-07-24 05:09:16.371963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.016 [2024-07-24 05:09:16.388106] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:02.016 [2024-07-24 05:09:16.388181] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:02.016 [2024-07-24 05:09:16.388202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.016 [2024-07-24 05:09:16.388217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:02.016 [2024-07-24 05:09:16.388230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.099 ms 00:20:02.016 [2024-07-24 05:09:16.388243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.016 [2024-07-24 05:09:16.416851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.016 [2024-07-24 
05:09:16.416935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:02.016 [2024-07-24 05:09:16.416954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.515 ms 00:20:02.016 [2024-07-24 05:09:16.416972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.016 [2024-07-24 05:09:16.431309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.016 [2024-07-24 05:09:16.431433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:02.016 [2024-07-24 05:09:16.431470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.244 ms 00:20:02.016 [2024-07-24 05:09:16.431489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.016 [2024-07-24 05:09:16.445101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.016 [2024-07-24 05:09:16.445157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:02.016 [2024-07-24 05:09:16.445172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.410 ms 00:20:02.016 [2024-07-24 05:09:16.445184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.016 [2024-07-24 05:09:16.445993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.016 [2024-07-24 05:09:16.446058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:02.016 [2024-07-24 05:09:16.446072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:20:02.016 [2024-07-24 05:09:16.446086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.016 [2024-07-24 05:09:16.518235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.016 [2024-07-24 05:09:16.518324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:02.016 [2024-07-24 05:09:16.518345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.119 ms 00:20:02.016 [2024-07-24 05:09:16.518358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.016 [2024-07-24 05:09:16.530693] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:02.016 [2024-07-24 05:09:16.544863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.016 [2024-07-24 05:09:16.544952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:02.016 [2024-07-24 05:09:16.544979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.339 ms 00:20:02.016 [2024-07-24 05:09:16.544991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.016 [2024-07-24 05:09:16.545154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.016 [2024-07-24 05:09:16.545173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:02.016 [2024-07-24 05:09:16.545188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:02.016 [2024-07-24 05:09:16.545198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.016 [2024-07-24 05:09:16.545316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.016 [2024-07-24 05:09:16.545348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:02.016 [2024-07-24 05:09:16.545365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:02.016 
[2024-07-24 05:09:16.545377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.017 [2024-07-24 05:09:16.545412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.017 [2024-07-24 05:09:16.545427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:02.017 [2024-07-24 05:09:16.545442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:02.017 [2024-07-24 05:09:16.545453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.017 [2024-07-24 05:09:16.545495] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:02.017 [2024-07-24 05:09:16.545524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.017 [2024-07-24 05:09:16.545541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:02.017 [2024-07-24 05:09:16.545555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:02.017 [2024-07-24 05:09:16.545571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.017 [2024-07-24 05:09:16.576209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.017 [2024-07-24 05:09:16.576269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:02.017 [2024-07-24 05:09:16.576286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.607 ms 00:20:02.017 [2024-07-24 05:09:16.576301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.017 [2024-07-24 05:09:16.576433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.017 [2024-07-24 05:09:16.576461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:02.017 [2024-07-24 05:09:16.576507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:02.017 [2024-07-24 05:09:16.576535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.017 [2024-07-24 05:09:16.577650] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:02.017 [2024-07-24 05:09:16.581858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 316.501 ms, result 0 00:20:02.017 [2024-07-24 05:09:16.583165] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:02.017 Some configs were skipped because the RPC state that can call them passed over. 
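The startup trace above follows a fixed shape: each management step is reported as four trace_step entries (Action, then name, duration, status), and the sequence closes with a finish_msg summary, here "Management process finished, name 'FTL startup', duration = 316.501 ms, result 0". The layout figures are internally consistent at a 4 KiB block size: the l2p region appears in the superblock dump as blk_sz:0x5a00 = 23040 blocks, and 23040 × 4 KiB = 90.00 MiB, matching the "Region l2p ... blocks: 90.00 MiB" line (equivalently, 23592960 L2P entries × 4 bytes per address = 90 MiB). When a run like this needs triage, the per-step durations can be tabulated straight from the console text. A minimal sketch, assuming the console was saved with one entry per line; the build.log path and the regexes are illustrative, not part of the test suite:

    #!/usr/bin/env python3
    # Tabulate per-step FTL durations from a saved console log.
    # Assumes trace_step lines shaped like the ones above, e.g.
    #   ... 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
    #   ... 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms
    import re
    import sys

    NAME = re.compile(r"\[FTL\]\[(\w+)\] name: (.+?)\s*$")
    DURATION = re.compile(r"\[FTL\]\[(\w+)\] duration: ([0-9.]+) ms")

    def main(path):
        pending = {}  # bdev -> step name still waiting for its duration line
        steps = []    # (bdev, step name, duration in ms)
        with open(path) as log:
            for line in log:
                m = NAME.search(line)
                if m:
                    pending[m.group(1)] = m.group(2)
                    continue
                m = DURATION.search(line)
                if m and m.group(1) in pending:
                    steps.append((m.group(1), pending.pop(m.group(1)),
                                  float(m.group(2))))
        for bdev, step, ms in sorted(steps, key=lambda s: -s[2]):  # slowest first
            print(f"{ms:10.3f} ms  [{bdev}] {step}")

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "build.log")

The two bdev_ftl_unmap calls that follow exercise both ends of the device: the second starts at LBA 23591936, which is 23592960 - 1024, i.e. the last 1024 blocks of the L2P range reported above.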
00:20:02.017 05:09:16 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:02.275 [2024-07-24 05:09:16.824374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.275 [2024-07-24 05:09:16.824434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:02.275 [2024-07-24 05:09:16.824460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms
00:20:02.275 [2024-07-24 05:09:16.824473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:02.275 [2024-07-24 05:09:16.824524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.866 ms, result 0
00:20:02.275 true
00:20:02.275 05:09:16 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:02.535 [2024-07-24 05:09:17.084213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.535 [2024-07-24 05:09:17.084293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:02.535 [2024-07-24 05:09:17.084314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms
00:20:02.535 [2024-07-24 05:09:17.084328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:02.535 [2024-07-24 05:09:17.084392] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.450 ms, result 0
00:20:02.535 true
00:20:02.535 05:09:17 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 80574
00:20:02.535 05:09:17 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80574 ']'
00:20:02.535 05:09:17 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80574
00:20:02.535 05:09:17 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname
00:20:02.535 05:09:17 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:02.535 05:09:17 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80574
00:20:02.535 05:09:17 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:20:02.535 05:09:17 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
killing process with pid 80574
05:09:17 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80574'
05:09:17 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 80574
05:09:17 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 80574
00:20:03.473 [2024-07-24 05:09:17.967327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:03.473 [2024-07-24 05:09:17.967418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:03.473 [2024-07-24 05:09:17.967442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:03.473 [2024-07-24 05:09:17.967456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:03.473 [2024-07-24 05:09:17.967496] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:03.473 [2024-07-24 05:09:17.970417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:03.473 [2024-07-24 05:09:17.970485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:03.473 [2024-07-24 05:09:17.970501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 2.897 ms 00:20:03.473 [2024-07-24 05:09:17.970515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.473 [2024-07-24 05:09:17.970841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.473 [2024-07-24 05:09:17.970908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:03.473 [2024-07-24 05:09:17.970925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:20:03.473 [2024-07-24 05:09:17.970940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.473 [2024-07-24 05:09:17.974890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.473 [2024-07-24 05:09:17.974952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:03.473 [2024-07-24 05:09:17.974971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.911 ms 00:20:03.473 [2024-07-24 05:09:17.974986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.473 [2024-07-24 05:09:17.982000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.473 [2024-07-24 05:09:17.982053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:03.473 [2024-07-24 05:09:17.982068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.968 ms 00:20:03.473 [2024-07-24 05:09:17.982083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.473 [2024-07-24 05:09:17.993219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.473 [2024-07-24 05:09:17.993276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:03.473 [2024-07-24 05:09:17.993292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.083 ms 00:20:03.473 [2024-07-24 05:09:17.993307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.473 [2024-07-24 05:09:18.001419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.473 [2024-07-24 05:09:18.001482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:03.473 [2024-07-24 05:09:18.001498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.070 ms 00:20:03.473 [2024-07-24 05:09:18.001511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.473 [2024-07-24 05:09:18.001649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.473 [2024-07-24 05:09:18.001671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:03.473 [2024-07-24 05:09:18.001684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:20:03.473 [2024-07-24 05:09:18.001738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.473 [2024-07-24 05:09:18.013177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.473 [2024-07-24 05:09:18.013232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:03.473 [2024-07-24 05:09:18.013247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.416 ms 00:20:03.473 [2024-07-24 05:09:18.013259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.473 [2024-07-24 05:09:18.024358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.473 [2024-07-24 05:09:18.024412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:03.473 [2024-07-24 
05:09:18.024426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.058 ms 00:20:03.473 [2024-07-24 05:09:18.024442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.473 [2024-07-24 05:09:18.035140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.473 [2024-07-24 05:09:18.035194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:03.473 [2024-07-24 05:09:18.035208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.658 ms 00:20:03.473 [2024-07-24 05:09:18.035219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.473 [2024-07-24 05:09:18.045915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.474 [2024-07-24 05:09:18.045969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:03.474 [2024-07-24 05:09:18.045983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.594 ms 00:20:03.474 [2024-07-24 05:09:18.045995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.474 [2024-07-24 05:09:18.046034] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:03.474 [2024-07-24 05:09:18.046059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046283] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 
05:09:18.046584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:03.474 [2024-07-24 05:09:18.046919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.046998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.047010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.047024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.047035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.047048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.047059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.047072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.047083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.047097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.047109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.047137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.047148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:03.474 [2024-07-24 05:09:18.047161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:03.475 [2024-07-24 05:09:18.047411] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:03.475 [2024-07-24 05:09:18.047424] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7530738e-4cfd-417f-b87d-53757612b8c5 00:20:03.475 [2024-07-24 05:09:18.047439] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:03.475 [2024-07-24 05:09:18.047450] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:03.475 [2024-07-24 05:09:18.047464] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:03.475 [2024-07-24 05:09:18.047475] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:03.475 [2024-07-24 05:09:18.047488] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:03.475 [2024-07-24 05:09:18.047500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:03.475 [2024-07-24 05:09:18.047514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:03.475 [2024-07-24 05:09:18.047525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:03.475 [2024-07-24 05:09:18.047549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:03.475 [2024-07-24 05:09:18.047561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.475 [2024-07-24 05:09:18.047574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:03.475 [2024-07-24 05:09:18.047601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.528 ms 00:20:03.475 [2024-07-24 05:09:18.047628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.475 [2024-07-24 05:09:18.062253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.475 [2024-07-24 05:09:18.062309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:03.475 [2024-07-24 05:09:18.062325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.583 ms 00:20:03.475 [2024-07-24 05:09:18.062340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.475 [2024-07-24 05:09:18.062825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:03.475 [2024-07-24 05:09:18.062876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:03.475 [2024-07-24 05:09:18.062894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:20:03.475 [2024-07-24 05:09:18.062908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.734 [2024-07-24 05:09:18.110995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.734 [2024-07-24 05:09:18.111064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:03.734 [2024-07-24 05:09:18.111081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.734 [2024-07-24 05:09:18.111094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.734 [2024-07-24 05:09:18.111214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.734 [2024-07-24 05:09:18.111235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:03.734 [2024-07-24 05:09:18.111250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.734 [2024-07-24 05:09:18.111271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.734 [2024-07-24 05:09:18.111353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.734 [2024-07-24 05:09:18.111376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:03.734 [2024-07-24 05:09:18.111390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.734 [2024-07-24 05:09:18.111407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.735 [2024-07-24 05:09:18.111433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.735 [2024-07-24 05:09:18.111450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:03.735 [2024-07-24 05:09:18.111462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.735 [2024-07-24 05:09:18.111480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.735 [2024-07-24 05:09:18.196273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.735 [2024-07-24 05:09:18.196360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:03.735 [2024-07-24 05:09:18.196380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.735 [2024-07-24 05:09:18.196392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.735 [2024-07-24 05:09:18.277179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.735 [2024-07-24 05:09:18.277251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:03.735 [2024-07-24 05:09:18.277272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.735 [2024-07-24 05:09:18.277286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.735 [2024-07-24 05:09:18.277386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.735 [2024-07-24 05:09:18.277442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:03.735 [2024-07-24 05:09:18.277471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.735 [2024-07-24 05:09:18.277487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:03.735 [2024-07-24 05:09:18.277522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:03.735 [2024-07-24 05:09:18.277539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:03.735 [2024-07-24 05:09:18.277550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:03.735 [2024-07-24 05:09:18.277564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:03.735 [2024-07-24 05:09:18.277688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:03.735 [2024-07-24 05:09:18.277722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:03.735 [2024-07-24 05:09:18.277736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:03.735 [2024-07-24 05:09:18.277750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:03.735 [2024-07-24 05:09:18.277812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:03.735 [2024-07-24 05:09:18.277836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:20:03.735 [2024-07-24 05:09:18.277883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:03.735 [2024-07-24 05:09:18.277897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:03.735 [2024-07-24 05:09:18.277951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:03.735 [2024-07-24 05:09:18.277969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:03.735 [2024-07-24 05:09:18.277982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:03.735 [2024-07-24 05:09:18.277997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:03.735 [2024-07-24 05:09:18.278050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:03.735 [2024-07-24 05:09:18.278071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:03.735 [2024-07-24 05:09:18.278084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:03.735 [2024-07-24 05:09:18.278097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:03.735 [2024-07-24 05:09:18.278272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 310.930 ms, result 0
00:20:04.672 05:09:19 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:04.672 [2024-07-24 05:09:19.195838] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
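The shutdown trace mirrors startup: the same steps reappear as Rollback entries, each reported at 0.000 ms, and the sequence again closes with a finish_msg summary ("FTL shutdown", 310.930 ms, result 0). The spdk_dd step above then reads 65536 blocks (--count) from the ftl0 bdev into test/ftl/data; at the 4 KiB block size inferred earlier that would be 256 MiB and, assuming the copy starts at block 0, it covers the first of the two trimmed ranges. For a quick pass/fail read of a whole log, the finish_msg summaries are enough. A minimal sketch under the same assumptions as before (one entry per line, illustrative build.log path):

    #!/usr/bin/env python3
    # Print every FTL management summary and exit non-zero if any step
    # reported a failure. Matches lines shaped like:
    #   ... Management process finished, name 'FTL trim', duration = 1.866 ms, result 0
    import re
    import sys

    SUMMARY = re.compile(r"Management process finished, name '([^']+)', "
                         r"duration = ([0-9.]+) ms, result (-?\d+)")

    failed = False
    with open(sys.argv[1] if len(sys.argv) > 1 else "build.log") as log:
        for line in log:
            m = SUMMARY.search(line)
            if m:
                name, ms, result = m.group(1), float(m.group(2)), int(m.group(3))
                print(f"{name:15s} {ms:10.3f} ms  result {result}")
                failed = failed or result != 0
    sys.exit(1 if failed else 0)

On this run all four summaries so far ("FTL startup" 316.501 ms, "FTL trim" 1.866 ms and 1.450 ms, "FTL shutdown" 310.930 ms) report result 0.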
00:20:04.672 [2024-07-24 05:09:19.196015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80628 ] 00:20:04.931 [2024-07-24 05:09:19.353784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.931 [2024-07-24 05:09:19.521559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.189 [2024-07-24 05:09:19.812665] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:05.189 [2024-07-24 05:09:19.812769] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:05.449 [2024-07-24 05:09:19.971694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.449 [2024-07-24 05:09:19.971777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:05.449 [2024-07-24 05:09:19.971796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:05.449 [2024-07-24 05:09:19.971807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.449 [2024-07-24 05:09:19.974651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.449 [2024-07-24 05:09:19.974706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.449 [2024-07-24 05:09:19.974721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.812 ms 00:20:05.449 [2024-07-24 05:09:19.974731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.449 [2024-07-24 05:09:19.974885] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:05.449 [2024-07-24 05:09:19.975935] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:05.449 [2024-07-24 05:09:19.975990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.449 [2024-07-24 05:09:19.976004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.449 [2024-07-24 05:09:19.976015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:20:05.449 [2024-07-24 05:09:19.976026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.449 [2024-07-24 05:09:19.977324] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:05.449 [2024-07-24 05:09:19.991384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.449 [2024-07-24 05:09:19.991427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:05.449 [2024-07-24 05:09:19.991449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.061 ms 00:20:05.449 [2024-07-24 05:09:19.991460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.449 [2024-07-24 05:09:19.991586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.449 [2024-07-24 05:09:19.991622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:05.449 [2024-07-24 05:09:19.991650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:05.449 [2024-07-24 05:09:19.991676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.449 [2024-07-24 05:09:19.996037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:05.449 [2024-07-24 05:09:19.996090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.449 [2024-07-24 05:09:19.996104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.292 ms 00:20:05.449 [2024-07-24 05:09:19.996114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.449 [2024-07-24 05:09:19.996220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.449 [2024-07-24 05:09:19.996239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.449 [2024-07-24 05:09:19.996251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:05.449 [2024-07-24 05:09:19.996261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.449 [2024-07-24 05:09:19.996299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.449 [2024-07-24 05:09:19.996329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:05.449 [2024-07-24 05:09:19.996359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:05.449 [2024-07-24 05:09:19.996384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.449 [2024-07-24 05:09:19.996417] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:05.449 [2024-07-24 05:09:20.000178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.449 [2024-07-24 05:09:20.000227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.449 [2024-07-24 05:09:20.000241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.769 ms 00:20:05.449 [2024-07-24 05:09:20.000251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.449 [2024-07-24 05:09:20.000313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.449 [2024-07-24 05:09:20.000331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:05.449 [2024-07-24 05:09:20.000343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:05.449 [2024-07-24 05:09:20.000353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.449 [2024-07-24 05:09:20.000376] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:05.449 [2024-07-24 05:09:20.000400] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:05.449 [2024-07-24 05:09:20.000455] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:05.449 [2024-07-24 05:09:20.000505] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:05.449 [2024-07-24 05:09:20.000601] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:05.449 [2024-07-24 05:09:20.000617] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:05.449 [2024-07-24 05:09:20.000631] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:05.449 [2024-07-24 05:09:20.000644] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:05.449 [2024-07-24 05:09:20.000657] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:05.449 [2024-07-24 05:09:20.000672] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:05.449 [2024-07-24 05:09:20.000683] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:05.449 [2024-07-24 05:09:20.000693] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:05.449 [2024-07-24 05:09:20.000703] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:05.449 [2024-07-24 05:09:20.000714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.449 [2024-07-24 05:09:20.000724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:05.449 [2024-07-24 05:09:20.000735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:20:05.449 [2024-07-24 05:09:20.000745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.449 [2024-07-24 05:09:20.000835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.449 [2024-07-24 05:09:20.000849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:05.449 [2024-07-24 05:09:20.000865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:05.449 [2024-07-24 05:09:20.000875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.449 [2024-07-24 05:09:20.000991] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:05.449 [2024-07-24 05:09:20.001019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:05.449 [2024-07-24 05:09:20.001032] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:05.449 [2024-07-24 05:09:20.001043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.449 [2024-07-24 05:09:20.001054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:05.449 [2024-07-24 05:09:20.001063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:05.449 [2024-07-24 05:09:20.001073] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:05.449 [2024-07-24 05:09:20.001099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:05.449 [2024-07-24 05:09:20.001109] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:05.449 [2024-07-24 05:09:20.001118] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:05.449 [2024-07-24 05:09:20.001128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:05.449 [2024-07-24 05:09:20.001137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:05.449 [2024-07-24 05:09:20.001147] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:05.449 [2024-07-24 05:09:20.001157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:05.449 [2024-07-24 05:09:20.001167] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:05.449 [2024-07-24 05:09:20.001177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.449 [2024-07-24 05:09:20.001186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:05.449 [2024-07-24 05:09:20.001196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:05.449 [2024-07-24 05:09:20.001221] ftl_layout.c: 
00:20:05.449 [2024-07-24 05:09:20.001221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:05.449 [2024-07-24 05:09:20.001231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:20:05.449 [2024-07-24 05:09:20.001241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:20:05.449 [2024-07-24 05:09:20.001251] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:05.449 [2024-07-24 05:09:20.001261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:20:05.449 [2024-07-24 05:09:20.001271] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:20:05.449 [2024-07-24 05:09:20.001280] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:05.449 [2024-07-24 05:09:20.001290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:20:05.449 [2024-07-24 05:09:20.001300] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:20:05.449 [2024-07-24 05:09:20.001310] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:05.449 [2024-07-24 05:09:20.001320] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:20:05.450 [2024-07-24 05:09:20.001329] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:20:05.450 [2024-07-24 05:09:20.001339] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:05.450 [2024-07-24 05:09:20.001348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:20:05.450 [2024-07-24 05:09:20.001374] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:20:05.450 [2024-07-24 05:09:20.001384] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:20:05.450 [2024-07-24 05:09:20.001394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:20:05.450 [2024-07-24 05:09:20.001419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:20:05.450 [2024-07-24 05:09:20.001430] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:20:05.450 [2024-07-24 05:09:20.001455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:20:05.450 [2024-07-24 05:09:20.001465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:20:05.450 [2024-07-24 05:09:20.001475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:05.450 [2024-07-24 05:09:20.001485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:20:05.450 [2024-07-24 05:09:20.001495] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:20:05.450 [2024-07-24 05:09:20.001504] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:05.450 [2024-07-24 05:09:20.001514] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:05.450 [2024-07-24 05:09:20.001524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:20:05.450 [2024-07-24 05:09:20.001535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:20:05.450 [2024-07-24 05:09:20.001546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:05.450 [2024-07-24 05:09:20.001562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:20:05.450 [2024-07-24 05:09:20.001572] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:20:05.450 [2024-07-24 05:09:20.001582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:20:05.450 [2024-07-24 05:09:20.001592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:20:05.450 [2024-07-24 05:09:20.001602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:20:05.450 [2024-07-24 05:09:20.001613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:20:05.450 [2024-07-24 05:09:20.001624] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:05.450 [2024-07-24 05:09:20.001637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:05.450 [2024-07-24 05:09:20.001650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:20:05.450 [2024-07-24 05:09:20.001661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:20:05.450 [2024-07-24 05:09:20.001672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:20:05.450 [2024-07-24 05:09:20.001683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:20:05.450 [2024-07-24 05:09:20.001695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:20:05.450 [2024-07-24 05:09:20.001706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:20:05.450 [2024-07-24 05:09:20.001716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:20:05.450 [2024-07-24 05:09:20.001742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:20:05.450 [2024-07-24 05:09:20.001753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:20:05.450 [2024-07-24 05:09:20.001764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:20:05.450 [2024-07-24 05:09:20.001775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:20:05.450 [2024-07-24 05:09:20.001786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:20:05.450 [2024-07-24 05:09:20.001796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:20:05.450 [2024-07-24 05:09:20.001807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:05.450 [2024-07-24 05:09:20.001818] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:05.450 [2024-07-24 05:09:20.001829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:05.450 [2024-07-24 05:09:20.001841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:05.450 [2024-07-24 05:09:20.001852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:20:05.450 [2024-07-24 05:09:20.001862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:20:05.450 [2024-07-24 05:09:20.001873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:20:05.450 [2024-07-24 05:09:20.001900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.450 [2024-07-24 05:09:20.001911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:20:05.450 [2024-07-24 05:09:20.001923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms
00:20:05.450 [2024-07-24 05:09:20.001948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
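The superblock dump lists each region as a type/version pair plus blk_offs/blk_sz in hex, counted in 4-KiB FTL blocks, so the descriptors can be lined up with the MiB figures in the layout dump above. For example the blk_sz:0x5a00 region at type:0x2 is 23040 blocks, i.e. the 90.00 MiB l2p region, and the 0x80-block regions match the 0.50 MiB band_md pair. Conversions in plain shell (the 4096-byte block size is an assumption, consistent with the block_size the bdevs report later in this log):

  $ echo $(( 0x5a00 * 4096 / 1024 / 1024 ))   # -> 90 (MiB)
  $ echo $(( 0x80 * 4096 / 1024 ))            # -> 512 (KiB, i.e. 0.50 MiB)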
00:20:05.450 [2024-07-24 05:09:20.038436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.450 [2024-07-24 05:09:20.038515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:05.450 [2024-07-24 05:09:20.038535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.393 ms
00:20:05.450 [2024-07-24 05:09:20.038546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.450 [2024-07-24 05:09:20.038732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.450 [2024-07-24 05:09:20.038772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:20:05.450 [2024-07-24 05:09:20.038785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:20:05.450 [2024-07-24 05:09:20.038810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.450 [2024-07-24 05:09:20.073184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.450 [2024-07-24 05:09:20.073253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:05.450 [2024-07-24 05:09:20.073272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.315 ms
00:20:05.450 [2024-07-24 05:09:20.073287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.450 [2024-07-24 05:09:20.073427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.450 [2024-07-24 05:09:20.073445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:05.450 [2024-07-24 05:09:20.073457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:05.450 [2024-07-24 05:09:20.073467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.450 [2024-07-24 05:09:20.073836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.450 [2024-07-24 05:09:20.073894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:05.450 [2024-07-24 05:09:20.073910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms
00:20:05.450 [2024-07-24 05:09:20.073927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.450 [2024-07-24 05:09:20.074186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.450 [2024-07-24 05:09:20.074219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:05.450 [2024-07-24 05:09:20.074234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms
00:20:05.450 [2024-07-24 05:09:20.074244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.711 [2024-07-24 05:09:20.089576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.711 [2024-07-24 05:09:20.089629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:05.711 [2024-07-24 05:09:20.089644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.300 ms
00:20:05.711 [2024-07-24 05:09:20.089655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.711 [2024-07-24 05:09:20.104062] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:20:05.711 [2024-07-24 05:09:20.104117] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:20:05.711 [2024-07-24 05:09:20.104148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.711 [2024-07-24 05:09:20.104159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:20:05.711 [2024-07-24 05:09:20.104171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.350 ms
00:20:05.711 [2024-07-24 05:09:20.104180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.711 [2024-07-24 05:09:20.129517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.711 [2024-07-24 05:09:20.129570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:20:05.711 [2024-07-24 05:09:20.129585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.251 ms
00:20:05.711 [2024-07-24 05:09:20.129611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.711 [2024-07-24 05:09:20.143229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.711 [2024-07-24 05:09:20.143317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:20:05.711 [2024-07-24 05:09:20.143332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.528 ms
00:20:05.711 [2024-07-24 05:09:20.143343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.711 [2024-07-24 05:09:20.156930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.711 [2024-07-24 05:09:20.156981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:20:05.711 [2024-07-24 05:09:20.156995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.495 ms
00:20:05.711 [2024-07-24 05:09:20.157005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.711 [2024-07-24 05:09:20.157816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.711 [2024-07-24 05:09:20.157885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:20:05.711 [2024-07-24 05:09:20.157901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms
00:20:05.711 [2024-07-24 05:09:20.157912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.711 [2024-07-24 05:09:20.221991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.711 [2024-07-24 05:09:20.222058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:20:05.711 [2024-07-24 05:09:20.222076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.046 ms
00:20:05.711 [2024-07-24 05:09:20.222087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.711 [2024-07-24 05:09:20.233081] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:20:05.711 [2024-07-24 05:09:20.245569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.711 [2024-07-24 05:09:20.245647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:20:05.711 [2024-07-24 05:09:20.245666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.332 ms
00:20:05.711 [2024-07-24 05:09:20.245676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.711 [2024-07-24 05:09:20.245825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.711 [2024-07-24 05:09:20.245909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:20:05.711 [2024-07-24 05:09:20.245924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:20:05.711 [2024-07-24 05:09:20.245949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.711 [2024-07-24 05:09:20.246018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.711 [2024-07-24 05:09:20.246044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:20:05.711 [2024-07-24 05:09:20.246058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:20:05.711 [2024-07-24 05:09:20.246069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.711 [2024-07-24 05:09:20.246101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.711 [2024-07-24 05:09:20.246131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:20:05.711 [2024-07-24 05:09:20.246144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:20:05.711 [2024-07-24 05:09:20.246155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.711 [2024-07-24 05:09:20.246193] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:05.711 [2024-07-24 05:09:20.246219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.711 [2024-07-24 05:09:20.246233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:20:05.711 [2024-07-24 05:09:20.246246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:20:05.711 [2024-07-24 05:09:20.246258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.711 [2024-07-24 05:09:20.273210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.711 [2024-07-24 05:09:20.273298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:20:05.711 [2024-07-24 05:09:20.273315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.916 ms
00:20:05.712 [2024-07-24 05:09:20.273326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.712 [2024-07-24 05:09:20.273498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.712 [2024-07-24 05:09:20.273527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:05.712 [2024-07-24 05:09:20.273541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:20:05.712 [2024-07-24 05:09:20.273553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
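Each management step is traced as an Action/name/duration/status quadruplet, so the per-step cost behind the overall startup time reported just below can be tabulated from a saved console log. One rough way to do it (ftl.log is an assumed name for the captured output):

  $ grep -oE 'name: [A-Za-z0-9 ]+|duration: [0-9.]+ ms' ftl.log | paste - - | sort -t: -k3 -gr | head -4

For this run that ranking is dominated by Initialize metadata (36.393 ms) and Initialize NV cache (34.315 ms), the two largest durations anywhere in this startup/shutdown cycle.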
00:20:05.712 [2024-07-24 05:09:20.274732] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:05.712 [2024-07-24 05:09:20.278531] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.658 ms, result 0
00:20:05.712 [2024-07-24 05:09:20.279420] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:05.712 [2024-07-24 05:09:20.295432] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:18.110  Copying: 24/256 [MB] (24 MBps) Copying: 46/256 [MB] (21 MBps) Copying: 68/256 [MB] (21 MBps) Copying: 89/256 [MB] (21 MBps) Copying: 111/256 [MB] (21 MBps) Copying: 132/256 [MB] (21 MBps) Copying: 153/256 [MB] (21 MBps) Copying: 175/256 [MB] (21 MBps) Copying: 197/256 [MB] (22 MBps) Copying: 218/256 [MB] (21 MBps) Copying: 239/256 [MB] (21 MBps) Copying: 256/256 [MB] (average 21 MBps)[2024-07-24 05:09:32.540563] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
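The copy throughput above is consistent with the surrounding timestamps: the IO channel used for the copy came up at 05:09:20.295 and was destroyed at 05:09:32.540, roughly 12.2 s for 256 MB, and 256 / 12.2 is about 21 MB/s, matching the logged average. The division, with bc doing only the arithmetic:

  $ echo 'scale=1; 256 / 12.2' | bc   # -> 20.9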
00:20:18.110 [2024-07-24 05:09:32.560791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:18.110 [2024-07-24 05:09:32.560888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:18.110 [2024-07-24 05:09:32.560914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:20:18.110 [2024-07-24 05:09:32.560935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.110 [2024-07-24 05:09:32.560972] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:18.110 [2024-07-24 05:09:32.564414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:18.110 [2024-07-24 05:09:32.564463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:18.110 [2024-07-24 05:09:32.564477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.418 ms
00:20:18.110 [2024-07-24 05:09:32.564487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.110 [2024-07-24 05:09:32.564768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:18.110 [2024-07-24 05:09:32.564798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:18.110 [2024-07-24 05:09:32.564811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms
00:20:18.110 [2024-07-24 05:09:32.564821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.110 [2024-07-24 05:09:32.568647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:18.110 [2024-07-24 05:09:32.568693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:18.110 [2024-07-24 05:09:32.568707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.759 ms
00:20:18.110 [2024-07-24 05:09:32.568717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.110 [2024-07-24 05:09:32.575863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:18.110 [2024-07-24 05:09:32.575929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:20:18.110 [2024-07-24 05:09:32.575944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.122 ms
00:20:18.110 [2024-07-24 05:09:32.575955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.110 [2024-07-24 05:09:32.602722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:18.110 [2024-07-24 05:09:32.602779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:18.110 [2024-07-24 05:09:32.602794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.697 ms
00:20:18.110 [2024-07-24 05:09:32.602804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.110 [2024-07-24 05:09:32.618230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:18.110 [2024-07-24 05:09:32.618283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:20:18.110 [2024-07-24 05:09:32.618305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.354 ms
00:20:18.110 [2024-07-24 05:09:32.618316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.110 [2024-07-24 05:09:32.618463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:18.110 [2024-07-24 05:09:32.618484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:18.110 [2024-07-24 05:09:32.618506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms
00:20:18.110 [2024-07-24 05:09:32.618516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.110 [2024-07-24 05:09:32.645215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:18.110 [2024-07-24 05:09:32.645267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:20:18.110 [2024-07-24 05:09:32.645282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.661 ms
00:20:18.110 [2024-07-24 05:09:32.645292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.110 [2024-07-24 05:09:32.671673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:18.110 [2024-07-24 05:09:32.671741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:20:18.110 [2024-07-24 05:09:32.671755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.302 ms
00:20:18.110 [2024-07-24 05:09:32.671765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.110 [2024-07-24 05:09:32.697578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:18.110 [2024-07-24 05:09:32.697632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:18.110 [2024-07-24 05:09:32.697647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.754 ms
00:20:18.110 [2024-07-24 05:09:32.697657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.110 [2024-07-24 05:09:32.724092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:18.110 [2024-07-24 05:09:32.724130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:18.110 [2024-07-24 05:09:32.724145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.333 ms
00:20:18.110 [2024-07-24 05:09:32.724154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.110 [2024-07-24 05:09:32.724235] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:18.110 [2024-07-24 05:09:32.724259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:20:18.110 [2024-07-24 05:09:32.724272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724941] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.724990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725233] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:18.110 [2024-07-24 05:09:32.725254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:18.111 [2024-07-24 05:09:32.725499] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:18.111 [2024-07-24 05:09:32.725510] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7530738e-4cfd-417f-b87d-53757612b8c5 00:20:18.111 [2024-07-24 05:09:32.725521] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:18.111 [2024-07-24 05:09:32.725531] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:18.111 [2024-07-24 05:09:32.725556] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:18.111 [2024-07-24 05:09:32.725567] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:18.111 [2024-07-24 05:09:32.725578] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:18.111 [2024-07-24 05:09:32.725588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:18.111 [2024-07-24 05:09:32.725598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:18.111 [2024-07-24 05:09:32.725607] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:18.111 [2024-07-24 05:09:32.725616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:18.111 [2024-07-24 05:09:32.725627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.111 [2024-07-24 05:09:32.725642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:18.111 [2024-07-24 05:09:32.725654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:20:18.111 [2024-07-24 05:09:32.725664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.369 [2024-07-24 05:09:32.743583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.369 [2024-07-24 05:09:32.743646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:18.369 [2024-07-24 05:09:32.743662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.893 ms 00:20:18.369 [2024-07-24 05:09:32.743687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.369 [2024-07-24 05:09:32.744198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.369 [2024-07-24 05:09:32.744231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:18.369 [2024-07-24 05:09:32.744245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:20:18.369 [2024-07-24 05:09:32.744256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.369 [2024-07-24 05:09:32.781532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.370 [2024-07-24 05:09:32.781597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.370 [2024-07-24 05:09:32.781612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.370 [2024-07-24 05:09:32.781622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.370 [2024-07-24 05:09:32.781750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.370 [2024-07-24 05:09:32.781786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.370 [2024-07-24 05:09:32.781797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.370 [2024-07-24 05:09:32.781807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.370 [2024-07-24 05:09:32.781910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.370 [2024-07-24 05:09:32.781930] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:18.370 [2024-07-24 05:09:32.781943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.370 [2024-07-24 05:09:32.781953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.370 [2024-07-24 05:09:32.781979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:18.370 [2024-07-24 05:09:32.781998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:18.370 [2024-07-24 05:09:32.782009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.370 [2024-07-24 05:09:32.782026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.370 [2024-07-24 05:09:32.865774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:18.370 [2024-07-24 05:09:32.865863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:18.370 [2024-07-24 05:09:32.865882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.370 [2024-07-24 05:09:32.865893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.370 [2024-07-24 05:09:32.936185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:18.370 [2024-07-24 05:09:32.936258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:18.370 [2024-07-24 05:09:32.936275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.370 [2024-07-24 05:09:32.936285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.370 [2024-07-24 05:09:32.936385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:18.370 [2024-07-24 05:09:32.936400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:18.370 [2024-07-24 05:09:32.936411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.370 [2024-07-24 05:09:32.936421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.370 [2024-07-24 05:09:32.936452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:18.370 [2024-07-24 05:09:32.936464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:18.370 [2024-07-24 05:09:32.936481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.370 [2024-07-24 05:09:32.936491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.370 [2024-07-24 05:09:32.936636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:18.370 [2024-07-24 05:09:32.936654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:18.370 [2024-07-24 05:09:32.936666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.370 [2024-07-24 05:09:32.936676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.370 [2024-07-24 05:09:32.936722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:18.370 [2024-07-24 05:09:32.936739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:20:18.370 [2024-07-24 05:09:32.936750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.370 [2024-07-24 05:09:32.936782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
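Two notes on the shutdown trace. The statistics dump above prints WAF: inf because write amplification is total writes divided by user writes, here WAF = 960 / 0: all 960 blocks written in this startup/shutdown cycle were FTL metadata, none were user data, so the ratio is undefined and rendered as inf. The Rollback records then replay the startup Actions in reverse order (trim map, valid map, NV cache, metadata, core IO channel, bands, memory pools, superblock, and below the cache and base bdevs), each with duration: 0.000 ms, presumably because a clean shutdown leaves nothing for the rollback handlers to undo.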
00:20:18.370 [2024-07-24 05:09:32.936827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:18.370 [2024-07-24 05:09:32.936844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:18.370 [2024-07-24 05:09:32.936856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.370 [2024-07-24 05:09:32.936866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.370 [2024-07-24 05:09:32.936936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:18.370 [2024-07-24 05:09:32.936954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:18.370 [2024-07-24 05:09:32.936971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:18.370 [2024-07-24 05:09:32.936982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:18.370 [2024-07-24 05:09:32.937143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.353 ms, result 0
00:20:19.305
00:20:19.305
00:20:19.305 05:09:33 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:19.872 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
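That OK line is the data-integrity gate for the trim test: an md5 of the test file, presumably recorded earlier in the run before the device was torn down, is re-checked against the restored data. The check is plain coreutils and can be reproduced by hand (paths as in the log; the first command is the assumed recording step):

  $ md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data > testfile.md5
  $ md5sum -c testfile.md5   # prints '/home/vagrant/spdk_repo/spdk/test/ftl/data: OK' on a match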
00:20:19.872 05:09:34 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:20:19.872 05:09:34 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:20:19.872 05:09:34 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:19.872 05:09:34 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:19.872 05:09:34 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:20:19.872 05:09:34 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:20:20.131 05:09:34 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 80574
00:20:20.131 05:09:34 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80574 ']'
00:20:20.131 05:09:34 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80574
00:20:20.131 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80574) - No such process
00:20:20.131 Process with pid 80574 is not found
00:20:20.131 05:09:34 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 80574 is not found'
00:20:20.131
00:20:20.131 real 1m10.500s
00:20:20.131 user 1m34.609s
00:20:20.131 sys 0m6.295s
00:20:20.131 05:09:34 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:20.131 05:09:34 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:20:20.131 ************************************
00:20:20.131 END TEST ftl_trim
00:20:20.131 ************************************
00:20:20.131 05:09:34 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:20:20.131 05:09:34 ftl.ftl_restore -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:20:20.131 05:09:34 ftl.ftl_restore -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:20.131 05:09:34 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:20:20.131 ************************************
00:20:20.131 START TEST ftl_restore
00:20:20.131 ************************************
00:20:20.131 05:09:34 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:20:20.131 * Looking for test storage...
00:20:20.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid=
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.qt6gJO3xlD
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240
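The option parsing just traced is a standard getopts loop over the optstring ':u:c:f': -c delivered the NV-cache PCI address, then the shift moved the remaining positionals into place so the base device could be taken from $1. A minimal sketch of the same pattern follows; only the -c branch is confirmed by the trace, while the -u and -f branches are guesses at what the other optstring letters stand for:

  while getopts ':u:c:f' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;   # traced above: nv_cache=0000:00:10.0
      u) uuid=$OPTARG ;;       # assumption, not shown in this run
      f) fast=1 ;;             # assumption, not shown in this run
    esac
  done
  shift $((OPTIND - 1))        # the script itself used a literal 'shift 2' here
  device=$1                    # traced above: device=0000:00:11.0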
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80848
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80848
00:20:20.131 05:09:34 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 80848 ']'
00:20:20.131 05:09:34 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:20.131 05:09:34 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:20.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:20.131 05:09:34 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:20.131 05:09:34 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:20.131 05:09:34 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:20.131 05:09:34 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:20:20.391 [2024-07-24 05:09:34.763013] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
00:20:20.391 [2024-07-24 05:09:34.763172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80848 ]
00:20:20.649 [2024-07-24 05:09:34.926108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:20.649 [2024-07-24 05:09:35.090487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:21.216 05:09:35 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:21.216 05:09:35 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0
00:20:21.216 05:09:35 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:20:21.216 05:09:35 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0
00:20:21.216 05:09:35 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:20:21.216 05:09:35 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424
00:20:21.216 05:09:35 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev
00:20:21.216 05:09:35 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:20:21.474 05:09:36 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:20:21.474 05:09:36 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size
00:20:21.474 05:09:36 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:20:21.474 05:09:36 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bdev_name=nvme0n1
00:20:21.474 05:09:36 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local bdev_info
00:20:21.474 05:09:36 ftl.ftl_restore -- 
common/autotest_common.sh@1378 -- # local bs 00:20:21.474 05:09:36 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local nb 00:20:21.474 05:09:36 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:21.733 05:09:36 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:20:21.733 { 00:20:21.733 "name": "nvme0n1", 00:20:21.733 "aliases": [ 00:20:21.733 "2cb122fb-c42d-4a72-a6fe-823401e174f2" 00:20:21.733 ], 00:20:21.733 "product_name": "NVMe disk", 00:20:21.733 "block_size": 4096, 00:20:21.733 "num_blocks": 1310720, 00:20:21.733 "uuid": "2cb122fb-c42d-4a72-a6fe-823401e174f2", 00:20:21.733 "assigned_rate_limits": { 00:20:21.733 "rw_ios_per_sec": 0, 00:20:21.733 "rw_mbytes_per_sec": 0, 00:20:21.733 "r_mbytes_per_sec": 0, 00:20:21.733 "w_mbytes_per_sec": 0 00:20:21.733 }, 00:20:21.733 "claimed": true, 00:20:21.733 "claim_type": "read_many_write_one", 00:20:21.733 "zoned": false, 00:20:21.733 "supported_io_types": { 00:20:21.733 "read": true, 00:20:21.733 "write": true, 00:20:21.733 "unmap": true, 00:20:21.733 "flush": true, 00:20:21.733 "reset": true, 00:20:21.733 "nvme_admin": true, 00:20:21.733 "nvme_io": true, 00:20:21.733 "nvme_io_md": false, 00:20:21.733 "write_zeroes": true, 00:20:21.733 "zcopy": false, 00:20:21.733 "get_zone_info": false, 00:20:21.733 "zone_management": false, 00:20:21.733 "zone_append": false, 00:20:21.733 "compare": true, 00:20:21.733 "compare_and_write": false, 00:20:21.733 "abort": true, 00:20:21.733 "seek_hole": false, 00:20:21.733 "seek_data": false, 00:20:21.733 "copy": true, 00:20:21.733 "nvme_iov_md": false 00:20:21.733 }, 00:20:21.733 "driver_specific": { 00:20:21.733 "nvme": [ 00:20:21.733 { 00:20:21.733 "pci_address": "0000:00:11.0", 00:20:21.733 "trid": { 00:20:21.733 "trtype": "PCIe", 00:20:21.733 "traddr": "0000:00:11.0" 00:20:21.733 }, 00:20:21.733 "ctrlr_data": { 00:20:21.733 "cntlid": 0, 00:20:21.733 "vendor_id": "0x1b36", 00:20:21.733 "model_number": "QEMU NVMe Ctrl", 00:20:21.733 "serial_number": "12341", 00:20:21.733 "firmware_revision": "8.0.0", 00:20:21.733 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:21.733 "oacs": { 00:20:21.733 "security": 0, 00:20:21.733 "format": 1, 00:20:21.733 "firmware": 0, 00:20:21.733 "ns_manage": 1 00:20:21.733 }, 00:20:21.733 "multi_ctrlr": false, 00:20:21.733 "ana_reporting": false 00:20:21.733 }, 00:20:21.733 "vs": { 00:20:21.733 "nvme_version": "1.4" 00:20:21.733 }, 00:20:21.733 "ns_data": { 00:20:21.733 "id": 1, 00:20:21.733 "can_share": false 00:20:21.733 } 00:20:21.733 } 00:20:21.733 ], 00:20:21.733 "mp_policy": "active_passive" 00:20:21.733 } 00:20:21.733 } 00:20:21.733 ]' 00:20:21.733 05:09:36 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:20:21.993 05:09:36 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # bs=4096 00:20:21.993 05:09:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:20:21.993 05:09:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # nb=1310720 00:20:21.993 05:09:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bdev_size=5120 00:20:21.993 05:09:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # echo 5120 00:20:21.993 05:09:36 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:21.993 05:09:36 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:21.993 05:09:36 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:21.993 05:09:36 ftl.ftl_restore -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:21.993 05:09:36 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:22.252 05:09:36 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=5da2d41f-f572-4282-9b95-f107de292170 00:20:22.252 05:09:36 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:22.252 05:09:36 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5da2d41f-f572-4282-9b95-f107de292170 00:20:22.252 05:09:36 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:22.511 05:09:37 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=28360d4b-9040-48c1-9cec-cdb60e129cbc 00:20:22.511 05:09:37 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 28360d4b-9040-48c1-9cec-cdb60e129cbc 00:20:22.770 05:09:37 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=07eb1837-6ddd-4f55-a758-92d3162263ff 00:20:22.770 05:09:37 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:22.770 05:09:37 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 07eb1837-6ddd-4f55-a758-92d3162263ff 00:20:22.770 05:09:37 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:22.770 05:09:37 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:22.770 05:09:37 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=07eb1837-6ddd-4f55-a758-92d3162263ff 00:20:22.770 05:09:37 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:22.770 05:09:37 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 07eb1837-6ddd-4f55-a758-92d3162263ff 00:20:22.770 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bdev_name=07eb1837-6ddd-4f55-a758-92d3162263ff 00:20:22.770 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local bdev_info 00:20:22.770 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bs 00:20:22.770 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local nb 00:20:22.770 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 07eb1837-6ddd-4f55-a758-92d3162263ff 00:20:23.039 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:20:23.039 { 00:20:23.039 "name": "07eb1837-6ddd-4f55-a758-92d3162263ff", 00:20:23.039 "aliases": [ 00:20:23.039 "lvs/nvme0n1p0" 00:20:23.039 ], 00:20:23.039 "product_name": "Logical Volume", 00:20:23.039 "block_size": 4096, 00:20:23.039 "num_blocks": 26476544, 00:20:23.039 "uuid": "07eb1837-6ddd-4f55-a758-92d3162263ff", 00:20:23.039 "assigned_rate_limits": { 00:20:23.039 "rw_ios_per_sec": 0, 00:20:23.039 "rw_mbytes_per_sec": 0, 00:20:23.039 "r_mbytes_per_sec": 0, 00:20:23.039 "w_mbytes_per_sec": 0 00:20:23.039 }, 00:20:23.039 "claimed": false, 00:20:23.039 "zoned": false, 00:20:23.039 "supported_io_types": { 00:20:23.039 "read": true, 00:20:23.039 "write": true, 00:20:23.039 "unmap": true, 00:20:23.039 "flush": false, 00:20:23.039 "reset": true, 00:20:23.039 "nvme_admin": false, 00:20:23.039 "nvme_io": false, 00:20:23.039 "nvme_io_md": false, 00:20:23.039 "write_zeroes": true, 00:20:23.039 "zcopy": false, 00:20:23.039 "get_zone_info": false, 00:20:23.039 "zone_management": false, 00:20:23.039 "zone_append": false, 00:20:23.039 "compare": false, 00:20:23.039 "compare_and_write": false, 00:20:23.039 "abort": 
false, 00:20:23.039 "seek_hole": true, 00:20:23.039 "seek_data": true, 00:20:23.039 "copy": false, 00:20:23.039 "nvme_iov_md": false 00:20:23.039 }, 00:20:23.039 "driver_specific": { 00:20:23.039 "lvol": { 00:20:23.039 "lvol_store_uuid": "28360d4b-9040-48c1-9cec-cdb60e129cbc", 00:20:23.039 "base_bdev": "nvme0n1", 00:20:23.039 "thin_provision": true, 00:20:23.039 "num_allocated_clusters": 0, 00:20:23.039 "snapshot": false, 00:20:23.039 "clone": false, 00:20:23.039 "esnap_clone": false 00:20:23.039 } 00:20:23.039 } 00:20:23.039 } 00:20:23.039 ]' 00:20:23.039 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:20:23.039 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # bs=4096 00:20:23.039 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:20:23.039 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # nb=26476544 00:20:23.039 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:20:23.039 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # echo 103424 00:20:23.039 05:09:37 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:23.039 05:09:37 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:23.039 05:09:37 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:23.312 05:09:37 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:23.312 05:09:37 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:23.312 05:09:37 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 07eb1837-6ddd-4f55-a758-92d3162263ff 00:20:23.312 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bdev_name=07eb1837-6ddd-4f55-a758-92d3162263ff 00:20:23.312 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local bdev_info 00:20:23.312 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bs 00:20:23.312 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local nb 00:20:23.312 05:09:37 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 07eb1837-6ddd-4f55-a758-92d3162263ff 00:20:23.571 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:20:23.571 { 00:20:23.571 "name": "07eb1837-6ddd-4f55-a758-92d3162263ff", 00:20:23.571 "aliases": [ 00:20:23.571 "lvs/nvme0n1p0" 00:20:23.571 ], 00:20:23.571 "product_name": "Logical Volume", 00:20:23.571 "block_size": 4096, 00:20:23.571 "num_blocks": 26476544, 00:20:23.571 "uuid": "07eb1837-6ddd-4f55-a758-92d3162263ff", 00:20:23.571 "assigned_rate_limits": { 00:20:23.571 "rw_ios_per_sec": 0, 00:20:23.571 "rw_mbytes_per_sec": 0, 00:20:23.571 "r_mbytes_per_sec": 0, 00:20:23.571 "w_mbytes_per_sec": 0 00:20:23.571 }, 00:20:23.571 "claimed": false, 00:20:23.571 "zoned": false, 00:20:23.571 "supported_io_types": { 00:20:23.571 "read": true, 00:20:23.571 "write": true, 00:20:23.571 "unmap": true, 00:20:23.571 "flush": false, 00:20:23.571 "reset": true, 00:20:23.571 "nvme_admin": false, 00:20:23.571 "nvme_io": false, 00:20:23.571 "nvme_io_md": false, 00:20:23.571 "write_zeroes": true, 00:20:23.571 "zcopy": false, 00:20:23.571 "get_zone_info": false, 00:20:23.571 "zone_management": false, 00:20:23.571 "zone_append": false, 00:20:23.571 "compare": false, 00:20:23.571 "compare_and_write": false, 00:20:23.571 "abort": false, 00:20:23.571 "seek_hole": true, 00:20:23.571 "seek_data": 
true, 00:20:23.571 "copy": false, 00:20:23.571 "nvme_iov_md": false 00:20:23.571 }, 00:20:23.571 "driver_specific": { 00:20:23.571 "lvol": { 00:20:23.571 "lvol_store_uuid": "28360d4b-9040-48c1-9cec-cdb60e129cbc", 00:20:23.571 "base_bdev": "nvme0n1", 00:20:23.571 "thin_provision": true, 00:20:23.571 "num_allocated_clusters": 0, 00:20:23.571 "snapshot": false, 00:20:23.571 "clone": false, 00:20:23.571 "esnap_clone": false 00:20:23.571 } 00:20:23.571 } 00:20:23.571 } 00:20:23.571 ]' 00:20:23.571 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:20:23.571 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # bs=4096 00:20:23.571 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:20:23.830 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # nb=26476544 00:20:23.830 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:20:23.830 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # echo 103424 00:20:23.830 05:09:38 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:23.830 05:09:38 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:24.089 05:09:38 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:24.089 05:09:38 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 07eb1837-6ddd-4f55-a758-92d3162263ff 00:20:24.089 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bdev_name=07eb1837-6ddd-4f55-a758-92d3162263ff 00:20:24.089 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local bdev_info 00:20:24.089 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bs 00:20:24.089 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local nb 00:20:24.089 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 07eb1837-6ddd-4f55-a758-92d3162263ff 00:20:24.348 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:20:24.348 { 00:20:24.348 "name": "07eb1837-6ddd-4f55-a758-92d3162263ff", 00:20:24.348 "aliases": [ 00:20:24.348 "lvs/nvme0n1p0" 00:20:24.348 ], 00:20:24.348 "product_name": "Logical Volume", 00:20:24.348 "block_size": 4096, 00:20:24.348 "num_blocks": 26476544, 00:20:24.348 "uuid": "07eb1837-6ddd-4f55-a758-92d3162263ff", 00:20:24.348 "assigned_rate_limits": { 00:20:24.348 "rw_ios_per_sec": 0, 00:20:24.348 "rw_mbytes_per_sec": 0, 00:20:24.348 "r_mbytes_per_sec": 0, 00:20:24.348 "w_mbytes_per_sec": 0 00:20:24.348 }, 00:20:24.348 "claimed": false, 00:20:24.348 "zoned": false, 00:20:24.348 "supported_io_types": { 00:20:24.348 "read": true, 00:20:24.348 "write": true, 00:20:24.348 "unmap": true, 00:20:24.348 "flush": false, 00:20:24.348 "reset": true, 00:20:24.348 "nvme_admin": false, 00:20:24.348 "nvme_io": false, 00:20:24.348 "nvme_io_md": false, 00:20:24.348 "write_zeroes": true, 00:20:24.348 "zcopy": false, 00:20:24.348 "get_zone_info": false, 00:20:24.348 "zone_management": false, 00:20:24.348 "zone_append": false, 00:20:24.348 "compare": false, 00:20:24.348 "compare_and_write": false, 00:20:24.348 "abort": false, 00:20:24.348 "seek_hole": true, 00:20:24.348 "seek_data": true, 00:20:24.348 "copy": false, 00:20:24.348 "nvme_iov_md": false 00:20:24.348 }, 00:20:24.348 "driver_specific": { 00:20:24.348 "lvol": { 00:20:24.348 "lvol_store_uuid": "28360d4b-9040-48c1-9cec-cdb60e129cbc", 00:20:24.348 "base_bdev": 
"nvme0n1", 00:20:24.348 "thin_provision": true, 00:20:24.348 "num_allocated_clusters": 0, 00:20:24.348 "snapshot": false, 00:20:24.348 "clone": false, 00:20:24.348 "esnap_clone": false 00:20:24.348 } 00:20:24.348 } 00:20:24.348 } 00:20:24.348 ]' 00:20:24.348 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:20:24.348 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # bs=4096 00:20:24.348 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:20:24.348 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # nb=26476544 00:20:24.348 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:20:24.348 05:09:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # echo 103424 00:20:24.348 05:09:38 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:24.348 05:09:38 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 07eb1837-6ddd-4f55-a758-92d3162263ff --l2p_dram_limit 10' 00:20:24.348 05:09:38 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:24.348 05:09:38 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:24.348 05:09:38 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:24.348 05:09:38 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:24.348 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:24.348 05:09:38 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 07eb1837-6ddd-4f55-a758-92d3162263ff --l2p_dram_limit 10 -c nvc0n1p0 00:20:24.608 [2024-07-24 05:09:39.065436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.608 [2024-07-24 05:09:39.065516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:24.608 [2024-07-24 05:09:39.065537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:24.608 [2024-07-24 05:09:39.065550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.608 [2024-07-24 05:09:39.065626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.608 [2024-07-24 05:09:39.065645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:24.608 [2024-07-24 05:09:39.065657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:24.608 [2024-07-24 05:09:39.065670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.608 [2024-07-24 05:09:39.065696] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:24.608 [2024-07-24 05:09:39.066692] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:24.608 [2024-07-24 05:09:39.066733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.608 [2024-07-24 05:09:39.066751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:24.608 [2024-07-24 05:09:39.066764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:20:24.608 [2024-07-24 05:09:39.066776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.608 [2024-07-24 05:09:39.066926] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7194b92d-8659-4940-8cfd-a816cffde944 00:20:24.608 [2024-07-24 
05:09:39.068130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.608 [2024-07-24 05:09:39.068199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:24.608 [2024-07-24 05:09:39.068233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:24.608 [2024-07-24 05:09:39.068245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.608 [2024-07-24 05:09:39.073298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.608 [2024-07-24 05:09:39.073344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:24.608 [2024-07-24 05:09:39.073365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.986 ms 00:20:24.608 [2024-07-24 05:09:39.073376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.608 [2024-07-24 05:09:39.073497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.608 [2024-07-24 05:09:39.073518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:24.608 [2024-07-24 05:09:39.073534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:24.608 [2024-07-24 05:09:39.073545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.608 [2024-07-24 05:09:39.073636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.608 [2024-07-24 05:09:39.073653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:24.608 [2024-07-24 05:09:39.073685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:24.609 [2024-07-24 05:09:39.073696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.609 [2024-07-24 05:09:39.073731] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:24.609 [2024-07-24 05:09:39.078530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.609 [2024-07-24 05:09:39.078589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:24.609 [2024-07-24 05:09:39.078605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.811 ms 00:20:24.609 [2024-07-24 05:09:39.078618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.609 [2024-07-24 05:09:39.078663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.609 [2024-07-24 05:09:39.078682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:24.609 [2024-07-24 05:09:39.078693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:24.609 [2024-07-24 05:09:39.078706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.609 [2024-07-24 05:09:39.078785] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:24.609 [2024-07-24 05:09:39.079063] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:24.609 [2024-07-24 05:09:39.079088] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:24.609 [2024-07-24 05:09:39.079110] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:24.609 [2024-07-24 05:09:39.079126] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:20:24.609 [2024-07-24 05:09:39.079157] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:24.609 [2024-07-24 05:09:39.079170] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:24.609 [2024-07-24 05:09:39.079200] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:24.609 [2024-07-24 05:09:39.079211] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:24.609 [2024-07-24 05:09:39.079223] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:24.609 [2024-07-24 05:09:39.079235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.609 [2024-07-24 05:09:39.079248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:24.609 [2024-07-24 05:09:39.079260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:20:24.609 [2024-07-24 05:09:39.079285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.609 [2024-07-24 05:09:39.079408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.609 [2024-07-24 05:09:39.079427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:24.609 [2024-07-24 05:09:39.079440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:24.609 [2024-07-24 05:09:39.079458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.609 [2024-07-24 05:09:39.079568] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:24.609 [2024-07-24 05:09:39.079590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:24.609 [2024-07-24 05:09:39.079616] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:24.609 [2024-07-24 05:09:39.079631] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.609 [2024-07-24 05:09:39.079644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:24.609 [2024-07-24 05:09:39.079657] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:24.609 [2024-07-24 05:09:39.079669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:24.609 [2024-07-24 05:09:39.079682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:24.609 [2024-07-24 05:09:39.079694] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:24.609 [2024-07-24 05:09:39.079708] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:24.609 [2024-07-24 05:09:39.079720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:24.609 [2024-07-24 05:09:39.079733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:24.609 [2024-07-24 05:09:39.079744] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:24.609 [2024-07-24 05:09:39.079757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:24.609 [2024-07-24 05:09:39.079769] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:24.609 [2024-07-24 05:09:39.079781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.609 [2024-07-24 05:09:39.079793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:24.609 [2024-07-24 05:09:39.079808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
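Aside: the create path traced above — attach the base controller at 0000:00:11.0 and the cache controller at 0000:00:10.0, sanity-check sizes by piping bdev_get_bdevs through jq, build a thin-provisioned 103424 MiB lvol on a fresh lvstore, split a 5171 MiB write-buffer slice off the cache device, then call bdev_ftl_create — condenses to a handful of RPCs. A minimal sketch follows, assuming a spdk_tgt already listening on /var/tmp/spdk.sock, jq on PATH, and the same QEMU NVMe addresses as this job; it replays the trace, it is not the test script itself.

    #!/usr/bin/env bash
    # Condensed replay of the ftl_restore setup traced above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base dev
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # NV cache

    # get_bdev_size in MiB: block_size * num_blocks / 1 MiB
    # (4096 * 1310720 = 5120 MiB for the namespace dumped above).
    bs=$($RPC bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')
    nb=$($RPC bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')
    echo "nvme0n1: $((bs * nb / 1024 / 1024)) MiB"

    # Thin-provisioned base volume; both RPCs print the UUID they create.
    lvs=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)
    lvol=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

    # 5171 MiB slice of the cache device, then the FTL bdev itself.
    $RPC bdev_split_create nvc0n1 -s 5171 1
    $RPC -t 240 bdev_ftl_create -b ftl0 -d "$lvol" --l2p_dram_limit 10 -c nvc0n1p0

The one piece of stderr in the run, "restore.sh: line 54: [: : integer expression expected", is the script testing an empty flag variable with '[' '' -eq 1 ']'; expanding the variable with a "${flag:-0}" style default (the name here is a placeholder, the log does not show the real one) would keep '[' from ever seeing an empty operand.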
00:20:24.609 [2024-07-24 05:09:39.079830] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.609 [2024-07-24 05:09:39.079843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:24.609 [2024-07-24 05:09:39.079854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:24.609 [2024-07-24 05:09:39.079886] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.609 [2024-07-24 05:09:39.079898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:24.609 [2024-07-24 05:09:39.079911] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:24.609 [2024-07-24 05:09:39.079922] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.609 [2024-07-24 05:09:39.079935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:24.609 [2024-07-24 05:09:39.079946] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:24.609 [2024-07-24 05:09:39.079959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.609 [2024-07-24 05:09:39.079970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:24.609 [2024-07-24 05:09:39.079983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:24.609 [2024-07-24 05:09:39.079994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.609 [2024-07-24 05:09:39.080007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:24.609 [2024-07-24 05:09:39.080018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:24.609 [2024-07-24 05:09:39.080033] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:24.609 [2024-07-24 05:09:39.080044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:24.609 [2024-07-24 05:09:39.080058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:24.609 [2024-07-24 05:09:39.080070] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:24.609 [2024-07-24 05:09:39.080083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:24.609 [2024-07-24 05:09:39.080094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:24.609 [2024-07-24 05:09:39.080106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.609 [2024-07-24 05:09:39.080118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:24.609 [2024-07-24 05:09:39.080132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:24.609 [2024-07-24 05:09:39.080144] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.609 [2024-07-24 05:09:39.080171] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:24.609 [2024-07-24 05:09:39.080184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:24.609 [2024-07-24 05:09:39.080197] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:24.609 [2024-07-24 05:09:39.080208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.609 [2024-07-24 05:09:39.080222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:24.609 [2024-07-24 05:09:39.080233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:24.609 [2024-07-24 05:09:39.080261] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:24.609 [2024-07-24 05:09:39.080273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:24.609 [2024-07-24 05:09:39.080285] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:24.609 [2024-07-24 05:09:39.080295] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:24.609 [2024-07-24 05:09:39.080313] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:24.609 [2024-07-24 05:09:39.080329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:24.609 [2024-07-24 05:09:39.080344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:24.609 [2024-07-24 05:09:39.080355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:24.609 [2024-07-24 05:09:39.080369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:24.609 [2024-07-24 05:09:39.080380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:24.609 [2024-07-24 05:09:39.080395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:24.609 [2024-07-24 05:09:39.080406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:24.609 [2024-07-24 05:09:39.080419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:24.609 [2024-07-24 05:09:39.080431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:24.609 [2024-07-24 05:09:39.080444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:24.609 [2024-07-24 05:09:39.080456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:24.609 [2024-07-24 05:09:39.080470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:24.609 [2024-07-24 05:09:39.080482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:24.609 [2024-07-24 05:09:39.080495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:24.609 [2024-07-24 05:09:39.080507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:24.610 [2024-07-24 05:09:39.080520] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:24.610 [2024-07-24 05:09:39.080533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:24.610 [2024-07-24 05:09:39.080547] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:24.610 [2024-07-24 05:09:39.080559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:24.610 [2024-07-24 05:09:39.080574] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:24.610 [2024-07-24 05:09:39.080586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:24.610 [2024-07-24 05:09:39.080600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.610 [2024-07-24 05:09:39.080612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:24.610 [2024-07-24 05:09:39.080626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:20:24.610 [2024-07-24 05:09:39.080638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.610 [2024-07-24 05:09:39.080693] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:24.610 [2024-07-24 05:09:39.080710] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:26.512 [2024-07-24 05:09:41.102542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.512 [2024-07-24 05:09:41.102612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:26.512 [2024-07-24 05:09:41.102651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2021.858 ms 00:20:26.512 [2024-07-24 05:09:41.102663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.512 [2024-07-24 05:09:41.131860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.512 [2024-07-24 05:09:41.131955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:26.512 [2024-07-24 05:09:41.131995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.916 ms 00:20:26.512 [2024-07-24 05:09:41.132007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.512 [2024-07-24 05:09:41.132184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.512 [2024-07-24 05:09:41.132203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:26.512 [2024-07-24 05:09:41.132222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:26.512 [2024-07-24 05:09:41.132233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.771 [2024-07-24 05:09:41.168361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.771 [2024-07-24 05:09:41.168416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:26.771 [2024-07-24 05:09:41.168454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.021 ms 00:20:26.771 [2024-07-24 05:09:41.168466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.771 [2024-07-24 05:09:41.168524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.771 [2024-07-24 05:09:41.168539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:26.771 [2024-07-24 05:09:41.168559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.003 ms 00:20:26.771 [2024-07-24 05:09:41.168571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.771 [2024-07-24 05:09:41.169202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.771 [2024-07-24 05:09:41.169277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:26.771 [2024-07-24 05:09:41.169537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:20:26.771 [2024-07-24 05:09:41.169592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.771 [2024-07-24 05:09:41.169747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.771 [2024-07-24 05:09:41.169769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:26.771 [2024-07-24 05:09:41.169784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:20:26.771 [2024-07-24 05:09:41.169795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.771 [2024-07-24 05:09:41.186137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.771 [2024-07-24 05:09:41.186180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:26.771 [2024-07-24 05:09:41.186215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.313 ms 00:20:26.771 [2024-07-24 05:09:41.186226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.771 [2024-07-24 05:09:41.198633] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:26.771 [2024-07-24 05:09:41.201407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.771 [2024-07-24 05:09:41.201458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:26.771 [2024-07-24 05:09:41.201474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.077 ms 00:20:26.771 [2024-07-24 05:09:41.201487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.771 [2024-07-24 05:09:41.271200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.771 [2024-07-24 05:09:41.271325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:26.771 [2024-07-24 05:09:41.271348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.677 ms 00:20:26.771 [2024-07-24 05:09:41.271362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.771 [2024-07-24 05:09:41.271578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.771 [2024-07-24 05:09:41.271632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:26.771 [2024-07-24 05:09:41.271645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:20:26.771 [2024-07-24 05:09:41.271659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.771 [2024-07-24 05:09:41.299022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.771 [2024-07-24 05:09:41.299116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:26.771 [2024-07-24 05:09:41.299151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.287 ms 00:20:26.771 [2024-07-24 05:09:41.299167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.771 [2024-07-24 05:09:41.327659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.771 [2024-07-24 
05:09:41.327733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:26.771 [2024-07-24 05:09:41.327750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.447 ms 00:20:26.771 [2024-07-24 05:09:41.327763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.771 [2024-07-24 05:09:41.328604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.771 [2024-07-24 05:09:41.328658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:26.771 [2024-07-24 05:09:41.328676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:20:26.771 [2024-07-24 05:09:41.328690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.031 [2024-07-24 05:09:41.410011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.031 [2024-07-24 05:09:41.410096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:27.031 [2024-07-24 05:09:41.410116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.244 ms 00:20:27.031 [2024-07-24 05:09:41.410134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.031 [2024-07-24 05:09:41.438009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.031 [2024-07-24 05:09:41.438070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:27.031 [2024-07-24 05:09:41.438089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.822 ms 00:20:27.031 [2024-07-24 05:09:41.438102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.031 [2024-07-24 05:09:41.465128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.031 [2024-07-24 05:09:41.465187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:27.031 [2024-07-24 05:09:41.465203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.980 ms 00:20:27.031 [2024-07-24 05:09:41.465216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.031 [2024-07-24 05:09:41.492742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.031 [2024-07-24 05:09:41.492819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:27.031 [2024-07-24 05:09:41.492837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.467 ms 00:20:27.031 [2024-07-24 05:09:41.492865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.031 [2024-07-24 05:09:41.492949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.031 [2024-07-24 05:09:41.492981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:27.031 [2024-07-24 05:09:41.492995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:27.031 [2024-07-24 05:09:41.493011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.031 [2024-07-24 05:09:41.493120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.031 [2024-07-24 05:09:41.493146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:27.031 [2024-07-24 05:09:41.493158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:27.031 [2024-07-24 05:09:41.493171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.031 [2024-07-24 05:09:41.494472] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2428.494 ms, result 0 00:20:27.031 { 00:20:27.031 "name": "ftl0", 00:20:27.031 "uuid": "7194b92d-8659-4940-8cfd-a816cffde944" 00:20:27.031 } 00:20:27.031 05:09:41 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:27.031 05:09:41 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:27.290 05:09:41 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:27.290 05:09:41 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:27.550 [2024-07-24 05:09:41.937765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.550 [2024-07-24 05:09:41.937872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:27.550 [2024-07-24 05:09:41.937901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:27.550 [2024-07-24 05:09:41.937915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.550 [2024-07-24 05:09:41.937957] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:27.550 [2024-07-24 05:09:41.941403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.550 [2024-07-24 05:09:41.941456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:27.550 [2024-07-24 05:09:41.941471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.409 ms 00:20:27.550 [2024-07-24 05:09:41.941484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.550 [2024-07-24 05:09:41.941841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.550 [2024-07-24 05:09:41.941873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:27.550 [2024-07-24 05:09:41.941898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:20:27.550 [2024-07-24 05:09:41.941933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.550 [2024-07-24 05:09:41.945440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.550 [2024-07-24 05:09:41.945489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:27.550 [2024-07-24 05:09:41.945504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.482 ms 00:20:27.550 [2024-07-24 05:09:41.945516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.550 [2024-07-24 05:09:41.952079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.550 [2024-07-24 05:09:41.952145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:27.550 [2024-07-24 05:09:41.952175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.539 ms 00:20:27.551 [2024-07-24 05:09:41.952187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.551 [2024-07-24 05:09:41.980292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.551 [2024-07-24 05:09:41.980355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:27.551 [2024-07-24 05:09:41.980373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.014 ms 00:20:27.551 [2024-07-24 05:09:41.980386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.551 [2024-07-24 
05:09:41.997599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.551 [2024-07-24 05:09:41.997663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:27.551 [2024-07-24 05:09:41.997681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.166 ms 00:20:27.551 [2024-07-24 05:09:41.997694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.551 [2024-07-24 05:09:41.997903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.551 [2024-07-24 05:09:41.997930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:27.551 [2024-07-24 05:09:41.997944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:20:27.551 [2024-07-24 05:09:41.997956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.551 [2024-07-24 05:09:42.026904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.551 [2024-07-24 05:09:42.026971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:27.551 [2024-07-24 05:09:42.026991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.921 ms 00:20:27.551 [2024-07-24 05:09:42.027004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.551 [2024-07-24 05:09:42.054840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.551 [2024-07-24 05:09:42.054943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:27.551 [2024-07-24 05:09:42.054963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.788 ms 00:20:27.551 [2024-07-24 05:09:42.054976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.551 [2024-07-24 05:09:42.081490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.551 [2024-07-24 05:09:42.081551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:27.551 [2024-07-24 05:09:42.081568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.466 ms 00:20:27.551 [2024-07-24 05:09:42.081580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.551 [2024-07-24 05:09:42.108674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.551 [2024-07-24 05:09:42.108733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:27.551 [2024-07-24 05:09:42.108750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.990 ms 00:20:27.551 [2024-07-24 05:09:42.108762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.551 [2024-07-24 05:09:42.108807] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:27.551 [2024-07-24 05:09:42.108834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.108885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.108910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.108922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.108934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 
05:09:42.108945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.108958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.108969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.108984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.108995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
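The unload sequence above is the first half of the restore scenario: everything the startup path initialized is persisted back — L2P, valid map, NV cache, P2L, band and trim metadata, then the superblock — and the device is flipped from dirty to clean before the per-band validity dump that continues below (all 100 bands still free, since the run wrote no user data). The second half would bring the same instance back by the UUID reported at create time. A minimal sketch, assuming SPDK's bdev_ftl_load RPC as the load-path counterpart of bdev_ftl_create; its exact flags are not shown in this excerpt, so treat the signature as an assumption:

    # Clean shutdown, then restore by UUID -- the round trip this test exercises.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    FTL_UUID=7194b92d-8659-4940-8cfd-a816cffde944  # printed by bdev_ftl_create above

    $RPC bdev_ftl_unload -b ftl0   # persists metadata and sets the clean state

    # Assumed signature: same base/cache wiring as bdev_ftl_create, plus the
    # UUID so FTL reloads the persisted tables instead of scrubbing the cache.
    $RPC -t 240 bdev_ftl_load -b ftl0 -u "$FTL_UUID" \
        -d 07eb1837-6ddd-4f55-a758-92d3162263ff -c nvc0n1p0 --l2p_dram_limit 10

On the load path a clean superblock lets FTL skip the roughly two-second "Scrub NV cache" step seen during creation and replay its persisted metadata instead.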
00:20:27.551 [2024-07-24 05:09:42.109285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:27.551 [2024-07-24 05:09:42.109843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.109855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.109879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.109921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.109934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.109950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.109962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.109975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.109987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:27.552 [2024-07-24 05:09:42.110381] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:27.552 [2024-07-24 05:09:42.110393] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7194b92d-8659-4940-8cfd-a816cffde944 00:20:27.552 [2024-07-24 05:09:42.110406] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:27.552 [2024-07-24 05:09:42.110417] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:27.552 [2024-07-24 05:09:42.110431] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:27.552 [2024-07-24 05:09:42.110442] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:27.552 [2024-07-24 05:09:42.110455] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:27.552 [2024-07-24 05:09:42.110466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:27.552 [2024-07-24 05:09:42.110478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:27.552 [2024-07-24 05:09:42.110488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:27.552 [2024-07-24 05:09:42.110499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:27.552 [2024-07-24 05:09:42.110510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.552 [2024-07-24 05:09:42.110523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:27.552 [2024-07-24 05:09:42.110535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:20:27.552 [2024-07-24 05:09:42.110552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.552 [2024-07-24 05:09:42.125128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.552 [2024-07-24 05:09:42.125183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:27.552 [2024-07-24 05:09:42.125199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.515 ms 00:20:27.552 [2024-07-24 05:09:42.125212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.552 [2024-07-24 05:09:42.125582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.552 [2024-07-24 05:09:42.125609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:27.552 [2024-07-24 05:09:42.125627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:20:27.552 [2024-07-24 05:09:42.125639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.552 [2024-07-24 05:09:42.170977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.552 [2024-07-24 05:09:42.171053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:27.552 [2024-07-24 05:09:42.171079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.552 [2024-07-24 05:09:42.171093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.552 [2024-07-24 05:09:42.171188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.552 [2024-07-24 05:09:42.171207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:27.552 [2024-07-24 05:09:42.171221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.552 [2024-07-24 05:09:42.171246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.552 [2024-07-24 05:09:42.171401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.552 [2024-07-24 05:09:42.171427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:27.552 [2024-07-24 05:09:42.171441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.552 [2024-07-24 05:09:42.171456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.552 [2024-07-24 05:09:42.171485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.552 [2024-07-24 05:09:42.171506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:20:27.552 [2024-07-24 05:09:42.171519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.552 [2024-07-24 05:09:42.171537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.812 [2024-07-24 05:09:42.263679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.812 [2024-07-24 05:09:42.263780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:27.812 [2024-07-24 05:09:42.263810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.812 [2024-07-24 05:09:42.263823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.812 [2024-07-24 05:09:42.338590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.812 [2024-07-24 05:09:42.338674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.812 [2024-07-24 05:09:42.338696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.812 [2024-07-24 05:09:42.338709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.812 [2024-07-24 05:09:42.338819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.812 [2024-07-24 05:09:42.338841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:27.812 [2024-07-24 05:09:42.338874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.812 [2024-07-24 05:09:42.338925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.812 [2024-07-24 05:09:42.339010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.812 [2024-07-24 05:09:42.339035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:27.812 [2024-07-24 05:09:42.339048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.812 [2024-07-24 05:09:42.339061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.812 [2024-07-24 05:09:42.339185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.812 [2024-07-24 05:09:42.339209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:27.812 [2024-07-24 05:09:42.339222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.812 [2024-07-24 05:09:42.339235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.812 [2024-07-24 05:09:42.339329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.812 [2024-07-24 05:09:42.339353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:27.812 [2024-07-24 05:09:42.339367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.812 [2024-07-24 05:09:42.339381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.812 [2024-07-24 05:09:42.339432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.812 [2024-07-24 05:09:42.339450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:27.812 [2024-07-24 05:09:42.339463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.812 [2024-07-24 05:09:42.339477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.812 [2024-07-24 05:09:42.339532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.812 [2024-07-24 05:09:42.339569] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:27.812 [2024-07-24 05:09:42.339583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:27.812 [2024-07-24 05:09:42.339596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:27.812 [2024-07-24 05:09:42.339795] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 401.987 ms, result 0
00:20:27.812 true
00:20:27.812 05:09:42 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80848
00:20:27.812 05:09:42 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 80848 ']'
00:20:27.812 05:09:42 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 80848
00:20:27.812 05:09:42 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname
00:20:27.812 05:09:42 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:27.812 05:09:42 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80848
00:20:27.812 killing process with pid 80848
00:20:27.812 05:09:42 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:20:27.812 05:09:42 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:20:27.812 05:09:42 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80848'
00:20:27.812 05:09:42 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 80848
00:20:27.812 05:09:42 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 80848
00:20:32.001 05:09:46 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:20:36.192 262144+0 records in
00:20:36.192 262144+0 records out
00:20:36.192 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.15977 s, 258 MB/s
00:20:36.192 05:09:50 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:20:38.096 05:09:52 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:38.354 [2024-07-24 05:09:52.764843] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
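[Editor's note] The "WAF: inf" in the shutdown statistics dump above follows from the usual write-amplification definition; this is a reading of the dump, not an SPDK statement:

    WAF = media writes / user writes = 960 / 0  ->  undefined, printed as "inf"

With user writes still at 0, the 960 media writes are presumably all metadata and housekeeping traffic, so the ratio has no meaningful value at this point in the test.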
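[Editor's note] The xtrace above walks through the test harness's teardown of the SPDK app (pid 80848). A minimal sketch of that kill-and-wait pattern, reconstructed only from the visible checks (the real helper is killprocess() in common/autotest_common.sh, which this does not reproduce in full):

    #!/usr/bin/env bash
    # Sketch of the teardown pattern seen in the trace above.
    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1           # the '[' -z 80848 ']' guard
        kill -0 "$pid" || return 1          # signal 0: is the process still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # The real helper special-cases a sudo wrapper; for reactor_0 it
        # falls through to a plain kill, as the trace shows.
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # only valid for a child of this shell; reaps the exit code
    }

The dd numbers that follow are self-consistent: 256K records of 4 KiB each is 1,073,741,824 bytes, and 1 GiB copied in 4.15977 s is ~258 MB/s, exactly as dd reports.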
00:20:38.354 [2024-07-24 05:09:52.765002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81071 ] 00:20:38.354 [2024-07-24 05:09:52.923518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.613 [2024-07-24 05:09:53.139171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.878 [2024-07-24 05:09:53.400319] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:38.878 [2024-07-24 05:09:53.400411] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:39.142 [2024-07-24 05:09:53.557923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.142 [2024-07-24 05:09:53.557984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:39.142 [2024-07-24 05:09:53.558019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.142 [2024-07-24 05:09:53.558031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.142 [2024-07-24 05:09:53.558094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.142 [2024-07-24 05:09:53.558112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.142 [2024-07-24 05:09:53.558124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:39.142 [2024-07-24 05:09:53.558138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.142 [2024-07-24 05:09:53.558171] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:39.142 [2024-07-24 05:09:53.558986] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:39.142 [2024-07-24 05:09:53.559026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.142 [2024-07-24 05:09:53.559040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.142 [2024-07-24 05:09:53.559059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:20:39.142 [2024-07-24 05:09:53.559070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.142 [2024-07-24 05:09:53.560406] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:39.142 [2024-07-24 05:09:53.574810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.142 [2024-07-24 05:09:53.574884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:39.142 [2024-07-24 05:09:53.574919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.406 ms 00:20:39.142 [2024-07-24 05:09:53.574931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.142 [2024-07-24 05:09:53.575037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.142 [2024-07-24 05:09:53.575061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:39.142 [2024-07-24 05:09:53.575075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:39.142 [2024-07-24 05:09:53.575086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.142 [2024-07-24 05:09:53.579718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:39.142 [2024-07-24 05:09:53.579758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.142 [2024-07-24 05:09:53.579804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.521 ms 00:20:39.142 [2024-07-24 05:09:53.579815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.142 [2024-07-24 05:09:53.579940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.142 [2024-07-24 05:09:53.579961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.142 [2024-07-24 05:09:53.579973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:20:39.142 [2024-07-24 05:09:53.580000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.142 [2024-07-24 05:09:53.580061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.142 [2024-07-24 05:09:53.580078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:39.142 [2024-07-24 05:09:53.580106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:39.142 [2024-07-24 05:09:53.580117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.142 [2024-07-24 05:09:53.580150] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:39.142 [2024-07-24 05:09:53.584180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.142 [2024-07-24 05:09:53.584214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.142 [2024-07-24 05:09:53.584244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.039 ms 00:20:39.142 [2024-07-24 05:09:53.584255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.142 [2024-07-24 05:09:53.584296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.142 [2024-07-24 05:09:53.584312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:39.142 [2024-07-24 05:09:53.584323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:39.142 [2024-07-24 05:09:53.584333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.142 [2024-07-24 05:09:53.584378] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:39.142 [2024-07-24 05:09:53.584408] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:39.142 [2024-07-24 05:09:53.584446] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:39.142 [2024-07-24 05:09:53.584467] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:39.142 [2024-07-24 05:09:53.584556] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:39.142 [2024-07-24 05:09:53.584570] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:39.142 [2024-07-24 05:09:53.584582] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:39.142 [2024-07-24 05:09:53.584595] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:39.142 [2024-07-24 05:09:53.584608] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:39.142 [2024-07-24 05:09:53.584619] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:39.142 [2024-07-24 05:09:53.584629] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:39.142 [2024-07-24 05:09:53.584639] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:39.142 [2024-07-24 05:09:53.584649] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:39.142 [2024-07-24 05:09:53.584659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.142 [2024-07-24 05:09:53.584673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:39.142 [2024-07-24 05:09:53.584684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:20:39.143 [2024-07-24 05:09:53.584694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.143 [2024-07-24 05:09:53.584771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.143 [2024-07-24 05:09:53.584800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:39.143 [2024-07-24 05:09:53.584811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:39.143 [2024-07-24 05:09:53.584822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.143 [2024-07-24 05:09:53.584980] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:39.143 [2024-07-24 05:09:53.585000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:39.143 [2024-07-24 05:09:53.585018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.143 [2024-07-24 05:09:53.585030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.143 [2024-07-24 05:09:53.585042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:39.143 [2024-07-24 05:09:53.585052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:39.143 [2024-07-24 05:09:53.585062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:39.143 [2024-07-24 05:09:53.585073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:39.143 [2024-07-24 05:09:53.585085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:39.143 [2024-07-24 05:09:53.585096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.143 [2024-07-24 05:09:53.585106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:39.143 [2024-07-24 05:09:53.585117] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:39.143 [2024-07-24 05:09:53.585127] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.143 [2024-07-24 05:09:53.585155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:39.143 [2024-07-24 05:09:53.585166] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:39.143 [2024-07-24 05:09:53.585180] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.143 [2024-07-24 05:09:53.585211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:39.143 [2024-07-24 05:09:53.585223] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:39.143 [2024-07-24 05:09:53.585234] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.143 [2024-07-24 05:09:53.585244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:39.143 [2024-07-24 05:09:53.585305] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:39.143 [2024-07-24 05:09:53.585325] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.143 [2024-07-24 05:09:53.585340] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:39.143 [2024-07-24 05:09:53.585351] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:39.143 [2024-07-24 05:09:53.585364] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.143 [2024-07-24 05:09:53.585381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:39.143 [2024-07-24 05:09:53.585395] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:39.143 [2024-07-24 05:09:53.585406] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.143 [2024-07-24 05:09:53.585417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:39.143 [2024-07-24 05:09:53.585428] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:39.143 [2024-07-24 05:09:53.585438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.143 [2024-07-24 05:09:53.585449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:39.143 [2024-07-24 05:09:53.585460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:39.143 [2024-07-24 05:09:53.585470] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.143 [2024-07-24 05:09:53.585481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:39.143 [2024-07-24 05:09:53.585491] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:39.143 [2024-07-24 05:09:53.585507] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.143 [2024-07-24 05:09:53.585523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:39.143 [2024-07-24 05:09:53.585534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:39.143 [2024-07-24 05:09:53.585546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.143 [2024-07-24 05:09:53.585561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:39.143 [2024-07-24 05:09:53.585572] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:39.143 [2024-07-24 05:09:53.585583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.143 [2024-07-24 05:09:53.585593] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:39.143 [2024-07-24 05:09:53.585604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:39.143 [2024-07-24 05:09:53.585619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.143 [2024-07-24 05:09:53.585640] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.143 [2024-07-24 05:09:53.585659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:39.143 [2024-07-24 05:09:53.585672] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:39.143 [2024-07-24 05:09:53.585683] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:39.143 
[2024-07-24 05:09:53.585694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:39.143 [2024-07-24 05:09:53.585704] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:39.143 [2024-07-24 05:09:53.585716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:39.143 [2024-07-24 05:09:53.585735] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:39.143 [2024-07-24 05:09:53.585751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.143 [2024-07-24 05:09:53.585764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:39.143 [2024-07-24 05:09:53.585775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:39.143 [2024-07-24 05:09:53.585807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:39.143 [2024-07-24 05:09:53.585823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:39.143 [2024-07-24 05:09:53.585834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:39.143 [2024-07-24 05:09:53.585848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:39.143 [2024-07-24 05:09:53.585862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:39.143 [2024-07-24 05:09:53.585873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:39.143 [2024-07-24 05:09:53.585884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:39.143 [2024-07-24 05:09:53.585896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:39.143 [2024-07-24 05:09:53.585911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:39.143 [2024-07-24 05:09:53.585949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:39.143 [2024-07-24 05:09:53.585965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:39.143 [2024-07-24 05:09:53.585977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:39.143 [2024-07-24 05:09:53.585989] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:39.143 [2024-07-24 05:09:53.586002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.143 [2024-07-24 05:09:53.586025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:39.143 [2024-07-24 05:09:53.586044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:39.143 [2024-07-24 05:09:53.586056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:39.143 [2024-07-24 05:09:53.586067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:39.143 [2024-07-24 05:09:53.586080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.143 [2024-07-24 05:09:53.586096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:39.143 [2024-07-24 05:09:53.586109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.203 ms 00:20:39.143 [2024-07-24 05:09:53.586136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.143 [2024-07-24 05:09:53.628662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.143 [2024-07-24 05:09:53.628723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.143 [2024-07-24 05:09:53.628761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.431 ms 00:20:39.143 [2024-07-24 05:09:53.628773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.143 [2024-07-24 05:09:53.628947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.143 [2024-07-24 05:09:53.628969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:39.143 [2024-07-24 05:09:53.628983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:39.143 [2024-07-24 05:09:53.629005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.143 [2024-07-24 05:09:53.669860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.143 [2024-07-24 05:09:53.669928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.143 [2024-07-24 05:09:53.669950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.752 ms 00:20:39.143 [2024-07-24 05:09:53.669964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.143 [2024-07-24 05:09:53.670033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.143 [2024-07-24 05:09:53.670052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.143 [2024-07-24 05:09:53.670066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:39.144 [2024-07-24 05:09:53.670084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.144 [2024-07-24 05:09:53.670630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.144 [2024-07-24 05:09:53.670659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.144 [2024-07-24 05:09:53.670674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:20:39.144 [2024-07-24 05:09:53.670686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.144 [2024-07-24 05:09:53.670894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.144 [2024-07-24 05:09:53.670923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.144 [2024-07-24 05:09:53.670949] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:20:39.144 [2024-07-24 05:09:53.670962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.144 [2024-07-24 05:09:53.688491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.144 [2024-07-24 05:09:53.688534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.144 [2024-07-24 05:09:53.688567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.475 ms 00:20:39.144 [2024-07-24 05:09:53.688583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.144 [2024-07-24 05:09:53.705097] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:39.144 [2024-07-24 05:09:53.705155] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:39.144 [2024-07-24 05:09:53.705192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.144 [2024-07-24 05:09:53.705219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:39.144 [2024-07-24 05:09:53.705237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.427 ms 00:20:39.144 [2024-07-24 05:09:53.705249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.144 [2024-07-24 05:09:53.730960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.144 [2024-07-24 05:09:53.731010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:39.144 [2024-07-24 05:09:53.731058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.597 ms 00:20:39.144 [2024-07-24 05:09:53.731076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.144 [2024-07-24 05:09:53.745394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.144 [2024-07-24 05:09:53.745434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:39.144 [2024-07-24 05:09:53.745467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.254 ms 00:20:39.144 [2024-07-24 05:09:53.745479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.144 [2024-07-24 05:09:53.759192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.144 [2024-07-24 05:09:53.759230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:39.144 [2024-07-24 05:09:53.759261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.670 ms 00:20:39.144 [2024-07-24 05:09:53.759272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.144 [2024-07-24 05:09:53.760132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.144 [2024-07-24 05:09:53.760174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:39.144 [2024-07-24 05:09:53.760190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:20:39.144 [2024-07-24 05:09:53.760202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.403 [2024-07-24 05:09:53.824290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.403 [2024-07-24 05:09:53.824357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:39.403 [2024-07-24 05:09:53.824392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.061 ms 00:20:39.403 [2024-07-24 05:09:53.824403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.403 [2024-07-24 05:09:53.835089] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:39.403 [2024-07-24 05:09:53.837264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.403 [2024-07-24 05:09:53.837297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:39.403 [2024-07-24 05:09:53.837328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.779 ms 00:20:39.403 [2024-07-24 05:09:53.837339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.403 [2024-07-24 05:09:53.837437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.403 [2024-07-24 05:09:53.837456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:39.403 [2024-07-24 05:09:53.837468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:39.403 [2024-07-24 05:09:53.837478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.403 [2024-07-24 05:09:53.837561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.403 [2024-07-24 05:09:53.837582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:39.403 [2024-07-24 05:09:53.837594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:39.403 [2024-07-24 05:09:53.837604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.403 [2024-07-24 05:09:53.837628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.403 [2024-07-24 05:09:53.837641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:39.403 [2024-07-24 05:09:53.837651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.403 [2024-07-24 05:09:53.837661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.403 [2024-07-24 05:09:53.837698] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:39.403 [2024-07-24 05:09:53.837714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.403 [2024-07-24 05:09:53.837723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:39.403 [2024-07-24 05:09:53.837738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:39.403 [2024-07-24 05:09:53.837748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.403 [2024-07-24 05:09:53.863000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.403 [2024-07-24 05:09:53.863177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:39.403 [2024-07-24 05:09:53.863382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.230 ms 00:20:39.403 [2024-07-24 05:09:53.863441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.403 [2024-07-24 05:09:53.863682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.403 [2024-07-24 05:09:53.863778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:39.403 [2024-07-24 05:09:53.863993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:39.403 [2024-07-24 05:09:53.864056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
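[Editor's note] The region sizes in the layout dump above can be cross-checked against the superblock metadata entries. The dump itself pins the FTL block size at 4 KiB (region sb: blk_sz:0x20, i.e. 32 blocks, shown as 0.12 MiB), so for the l2p region (20971520 L2P entries at an address size of 4 bytes, apparently metadata type:0x2):

    echo $((20971520 * 4))                      # L2P table bytes: 83886080 (80 MiB)
    echo $((20971520 * 4 / 4096))               # as 4 KiB blocks: 20480
    printf '0x%x\n' $((20971520 * 4 / 4096))    # 0x5000 -- matches blk_offs:0x20 blk_sz:0x5000

This agrees with the "Region l2p ... blocks: 80.00 MiB" line in the NV cache layout.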
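[Editor's note] Every management step in the startup sequence above is logged by mngt/ftl_mngt.c as an Action/name/duration/status quadruple (the 427:/428:/430:/431: trace_step lines). A rough sketch for summarizing them, assuming one trace entry per line as the log is emitted ("ftl.log" is a placeholder file name):

    awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); printf "%10s ms  %s\n", $1, name }' ftl.log |
        sort -rn    # slowest steps first

Run over the startup above, this would surface "Restore P2L checkpoints" (64.061 ms), "Initialize metadata" (42.431 ms) and "Initialize NV cache" (40.752 ms) as the bulk of the 306.952 ms 'FTL startup' total.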
00:20:39.403 [2024-07-24 05:09:53.865530] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 306.952 ms, result 0 00:21:23.004  Copying: 20/1024 [MB] (20 MBps) Copying: 44/1024 [MB] (23 MBps) Copying: 68/1024 [MB] (24 MBps) Copying: 92/1024 [MB] (23 MBps) Copying: 115/1024 [MB] (23 MBps) Copying: 139/1024 [MB] (23 MBps) Copying: 162/1024 [MB] (23 MBps) Copying: 185/1024 [MB] (23 MBps) Copying: 208/1024 [MB] (23 MBps) Copying: 231/1024 [MB] (23 MBps) Copying: 255/1024 [MB] (23 MBps) Copying: 279/1024 [MB] (23 MBps) Copying: 303/1024 [MB] (24 MBps) Copying: 326/1024 [MB] (22 MBps) Copying: 350/1024 [MB] (23 MBps) Copying: 374/1024 [MB] (23 MBps) Copying: 397/1024 [MB] (23 MBps) Copying: 421/1024 [MB] (23 MBps) Copying: 444/1024 [MB] (23 MBps) Copying: 468/1024 [MB] (23 MBps) Copying: 491/1024 [MB] (23 MBps) Copying: 515/1024 [MB] (23 MBps) Copying: 538/1024 [MB] (23 MBps) Copying: 562/1024 [MB] (23 MBps) Copying: 585/1024 [MB] (23 MBps) Copying: 609/1024 [MB] (23 MBps) Copying: 632/1024 [MB] (23 MBps) Copying: 656/1024 [MB] (23 MBps) Copying: 678/1024 [MB] (22 MBps) Copying: 702/1024 [MB] (23 MBps) Copying: 725/1024 [MB] (23 MBps) Copying: 749/1024 [MB] (23 MBps) Copying: 773/1024 [MB] (23 MBps) Copying: 797/1024 [MB] (24 MBps) Copying: 821/1024 [MB] (24 MBps) Copying: 844/1024 [MB] (23 MBps) Copying: 868/1024 [MB] (23 MBps) Copying: 891/1024 [MB] (23 MBps) Copying: 914/1024 [MB] (22 MBps) Copying: 937/1024 [MB] (23 MBps) Copying: 961/1024 [MB] (23 MBps) Copying: 984/1024 [MB] (23 MBps) Copying: 1007/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-24 05:10:37.574720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.004 [2024-07-24 05:10:37.574804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:23.004 [2024-07-24 05:10:37.574843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:23.004 [2024-07-24 05:10:37.574870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.004 [2024-07-24 05:10:37.574918] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:23.004 [2024-07-24 05:10:37.578204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.004 [2024-07-24 05:10:37.578238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:23.004 [2024-07-24 05:10:37.578269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.265 ms 00:21:23.004 [2024-07-24 05:10:37.578279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.004 [2024-07-24 05:10:37.579967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.004 [2024-07-24 05:10:37.580005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:23.004 [2024-07-24 05:10:37.580020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.654 ms 00:21:23.004 [2024-07-24 05:10:37.580031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.004 [2024-07-24 05:10:37.595177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.004 [2024-07-24 05:10:37.595218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:23.004 [2024-07-24 05:10:37.595251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.126 ms 00:21:23.004 [2024-07-24 05:10:37.595262] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:23.004 [2024-07-24 05:10:37.601346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.004 [2024-07-24 05:10:37.601387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:23.004 [2024-07-24 05:10:37.601418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.046 ms 00:21:23.004 [2024-07-24 05:10:37.601428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.004 [2024-07-24 05:10:37.632494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.004 [2024-07-24 05:10:37.632548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:23.004 [2024-07-24 05:10:37.632596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.009 ms 00:21:23.004 [2024-07-24 05:10:37.632610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.264 [2024-07-24 05:10:37.649575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.264 [2024-07-24 05:10:37.649631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:23.264 [2024-07-24 05:10:37.649655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.923 ms 00:21:23.264 [2024-07-24 05:10:37.649668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.264 [2024-07-24 05:10:37.649827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.264 [2024-07-24 05:10:37.649848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:23.264 [2024-07-24 05:10:37.649878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:21:23.264 [2024-07-24 05:10:37.649921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.264 [2024-07-24 05:10:37.680969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.264 [2024-07-24 05:10:37.681014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:23.264 [2024-07-24 05:10:37.681031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.026 ms 00:21:23.264 [2024-07-24 05:10:37.681043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.264 [2024-07-24 05:10:37.709600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.264 [2024-07-24 05:10:37.709639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:23.264 [2024-07-24 05:10:37.709670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.515 ms 00:21:23.264 [2024-07-24 05:10:37.709681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.264 [2024-07-24 05:10:37.738220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.264 [2024-07-24 05:10:37.738260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:23.264 [2024-07-24 05:10:37.738291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.498 ms 00:21:23.264 [2024-07-24 05:10:37.738317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.264 [2024-07-24 05:10:37.766226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.264 [2024-07-24 05:10:37.766279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:23.264 [2024-07-24 05:10:37.766311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.827 ms 
00:21:23.264 [2024-07-24 05:10:37.766322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.264 [2024-07-24 05:10:37.766362] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:23.264 [2024-07-24 05:10:37.766386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:23.264 [2024-07-24 05:10:37.766550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766636] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 
[2024-07-24 05:10:37.766955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.766993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:21:23.265 [2024-07-24 05:10:37.767252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:23.265 [2024-07-24 05:10:37.767669] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:23.265 [2024-07-24 05:10:37.767682] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7194b92d-8659-4940-8cfd-a816cffde944 00:21:23.265 [2024-07-24 05:10:37.767694] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:23.265 [2024-07-24 05:10:37.767712] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:23.266 [2024-07-24 05:10:37.767723] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:23.266 [2024-07-24 05:10:37.767735] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:23.266 [2024-07-24 05:10:37.767746] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:23.266 [2024-07-24 05:10:37.767758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:23.266 [2024-07-24 05:10:37.767783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:23.266 [2024-07-24 05:10:37.767794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:23.266 [2024-07-24 05:10:37.767804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:23.266 [2024-07-24 05:10:37.767815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.266 [2024-07-24 05:10:37.767826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:23.266 [2024-07-24 05:10:37.767838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:21:23.266 [2024-07-24 05:10:37.767866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.266 [2024-07-24 05:10:37.782990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.266 [2024-07-24 05:10:37.783027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:23.266 [2024-07-24 05:10:37.783075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.080 ms 00:21:23.266 [2024-07-24 05:10:37.783101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.266 [2024-07-24 05:10:37.783583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.266 [2024-07-24 05:10:37.783615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:23.266 [2024-07-24 05:10:37.783646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:21:23.266 [2024-07-24 05:10:37.783658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.266 [2024-07-24 05:10:37.817277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.266 [2024-07-24 05:10:37.817328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:23.266 [2024-07-24 05:10:37.817361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.266 [2024-07-24 05:10:37.817372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.266 [2024-07-24 05:10:37.817437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.266 [2024-07-24 05:10:37.817452] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:23.266 [2024-07-24 05:10:37.817464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.266 [2024-07-24 05:10:37.817474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.266 [2024-07-24 05:10:37.817578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.266 [2024-07-24 05:10:37.817597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:23.266 [2024-07-24 05:10:37.817609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.266 [2024-07-24 05:10:37.817620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.266 [2024-07-24 05:10:37.817641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.266 [2024-07-24 05:10:37.817654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:23.266 [2024-07-24 05:10:37.817665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.266 [2024-07-24 05:10:37.817675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.525 [2024-07-24 05:10:37.905962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.525 [2024-07-24 05:10:37.906025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:23.525 [2024-07-24 05:10:37.906060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.525 [2024-07-24 05:10:37.906071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.525 [2024-07-24 05:10:37.980240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.525 [2024-07-24 05:10:37.980303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:23.525 [2024-07-24 05:10:37.980321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.525 [2024-07-24 05:10:37.980332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.525 [2024-07-24 05:10:37.980409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.525 [2024-07-24 05:10:37.980431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:23.525 [2024-07-24 05:10:37.980442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.525 [2024-07-24 05:10:37.980452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.525 [2024-07-24 05:10:37.980516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.525 [2024-07-24 05:10:37.980531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:23.525 [2024-07-24 05:10:37.980542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.525 [2024-07-24 05:10:37.980552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.525 [2024-07-24 05:10:37.980654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.525 [2024-07-24 05:10:37.980671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:23.525 [2024-07-24 05:10:37.980688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.525 [2024-07-24 05:10:37.980698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.525 [2024-07-24 05:10:37.980742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:21:23.525 [2024-07-24 05:10:37.980758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:23.525 [2024-07-24 05:10:37.980769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.525 [2024-07-24 05:10:37.980779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.525 [2024-07-24 05:10:37.980816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.525 [2024-07-24 05:10:37.980830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:23.525 [2024-07-24 05:10:37.980904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.525 [2024-07-24 05:10:37.980916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.525 [2024-07-24 05:10:37.980966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.525 [2024-07-24 05:10:37.980998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:23.525 [2024-07-24 05:10:37.981010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.525 [2024-07-24 05:10:37.981020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.525 [2024-07-24 05:10:37.981151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 406.412 ms, result 0 00:21:24.460 00:21:24.460 00:21:24.460 05:10:39 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:24.719 [2024-07-24 05:10:39.159379] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:21:24.719 [2024-07-24 05:10:39.159521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81532 ] 00:21:24.719 [2024-07-24 05:10:39.322673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.977 [2024-07-24 05:10:39.483598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.236 [2024-07-24 05:10:39.774395] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:25.236 [2024-07-24 05:10:39.774484] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:25.496 [2024-07-24 05:10:39.932416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.496 [2024-07-24 05:10:39.932471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:25.496 [2024-07-24 05:10:39.932506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:25.496 [2024-07-24 05:10:39.932516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.496 [2024-07-24 05:10:39.932578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.496 [2024-07-24 05:10:39.932596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:25.496 [2024-07-24 05:10:39.932606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:25.496 [2024-07-24 05:10:39.932619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.496 [2024-07-24 05:10:39.932651] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:25.496 [2024-07-24 05:10:39.933475] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:25.496 [2024-07-24 05:10:39.933507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.496 [2024-07-24 05:10:39.933520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:25.496 [2024-07-24 05:10:39.933531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:21:25.496 [2024-07-24 05:10:39.933541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.496 [2024-07-24 05:10:39.934720] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:25.496 [2024-07-24 05:10:39.948949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.496 [2024-07-24 05:10:39.949006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:25.496 [2024-07-24 05:10:39.949039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.230 ms 00:21:25.496 [2024-07-24 05:10:39.949050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.496 [2024-07-24 05:10:39.949118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.496 [2024-07-24 05:10:39.949138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:25.496 [2024-07-24 05:10:39.949150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:25.496 [2024-07-24 05:10:39.949160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.496 [2024-07-24 05:10:39.953585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:25.496 [2024-07-24 05:10:39.953625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:25.496 [2024-07-24 05:10:39.953656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.339 ms 00:21:25.496 [2024-07-24 05:10:39.953667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.496 [2024-07-24 05:10:39.953754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.496 [2024-07-24 05:10:39.953772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:25.496 [2024-07-24 05:10:39.953783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:25.496 [2024-07-24 05:10:39.953792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.496 [2024-07-24 05:10:39.953877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.496 [2024-07-24 05:10:39.953931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:25.496 [2024-07-24 05:10:39.953944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:25.496 [2024-07-24 05:10:39.953955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.496 [2024-07-24 05:10:39.954013] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:25.496 [2024-07-24 05:10:39.957786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.496 [2024-07-24 05:10:39.957867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:25.496 [2024-07-24 05:10:39.957900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.799 ms 00:21:25.496 [2024-07-24 05:10:39.957916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.496 [2024-07-24 05:10:39.957960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.496 [2024-07-24 05:10:39.957976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:25.496 [2024-07-24 05:10:39.957987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:25.496 [2024-07-24 05:10:39.957997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.496 [2024-07-24 05:10:39.958040] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:25.496 [2024-07-24 05:10:39.958067] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:25.496 [2024-07-24 05:10:39.958107] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:25.496 [2024-07-24 05:10:39.958129] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:25.496 [2024-07-24 05:10:39.958259] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:25.496 [2024-07-24 05:10:39.958274] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:25.496 [2024-07-24 05:10:39.958288] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:25.496 [2024-07-24 05:10:39.958302] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:25.496 [2024-07-24 05:10:39.958314] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:25.496 [2024-07-24 05:10:39.958326] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:25.496 [2024-07-24 05:10:39.958336] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:25.496 [2024-07-24 05:10:39.958346] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:25.496 [2024-07-24 05:10:39.958356] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:25.496 [2024-07-24 05:10:39.958374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.496 [2024-07-24 05:10:39.958385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:25.496 [2024-07-24 05:10:39.958397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:21:25.496 [2024-07-24 05:10:39.958407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.496 [2024-07-24 05:10:39.958491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.496 [2024-07-24 05:10:39.958505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:25.496 [2024-07-24 05:10:39.958516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:25.496 [2024-07-24 05:10:39.958528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.496 [2024-07-24 05:10:39.958629] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:25.496 [2024-07-24 05:10:39.958658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:25.496 [2024-07-24 05:10:39.958670] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:25.496 [2024-07-24 05:10:39.958681] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.496 [2024-07-24 05:10:39.958691] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:25.496 [2024-07-24 05:10:39.958701] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:25.496 [2024-07-24 05:10:39.958711] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:25.496 [2024-07-24 05:10:39.958721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:25.496 [2024-07-24 05:10:39.958731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:25.496 [2024-07-24 05:10:39.958740] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:25.496 [2024-07-24 05:10:39.958750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:25.496 [2024-07-24 05:10:39.958760] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:25.496 [2024-07-24 05:10:39.958770] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:25.496 [2024-07-24 05:10:39.958779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:25.496 [2024-07-24 05:10:39.958789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:25.496 [2024-07-24 05:10:39.958799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.496 [2024-07-24 05:10:39.958810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:25.496 [2024-07-24 05:10:39.958820] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:25.496 [2024-07-24 05:10:39.958829] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.496 [2024-07-24 05:10:39.958839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:25.496 [2024-07-24 05:10:39.958862] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:25.496 [2024-07-24 05:10:39.958873] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.496 [2024-07-24 05:10:39.958897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:25.496 [2024-07-24 05:10:39.958911] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:25.496 [2024-07-24 05:10:39.958920] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.496 [2024-07-24 05:10:39.958930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:25.496 [2024-07-24 05:10:39.958939] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:25.496 [2024-07-24 05:10:39.958949] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.496 [2024-07-24 05:10:39.958958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:25.496 [2024-07-24 05:10:39.958968] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:25.497 [2024-07-24 05:10:39.958977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.497 [2024-07-24 05:10:39.958987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:25.497 [2024-07-24 05:10:39.958997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:25.497 [2024-07-24 05:10:39.959006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:25.497 [2024-07-24 05:10:39.959015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:25.497 [2024-07-24 05:10:39.959025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:25.497 [2024-07-24 05:10:39.959035] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:25.497 [2024-07-24 05:10:39.959046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:25.497 [2024-07-24 05:10:39.959056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:25.497 [2024-07-24 05:10:39.959065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.497 [2024-07-24 05:10:39.959075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:25.497 [2024-07-24 05:10:39.959084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:25.497 [2024-07-24 05:10:39.959094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.497 [2024-07-24 05:10:39.959103] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:25.497 [2024-07-24 05:10:39.959129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:25.497 [2024-07-24 05:10:39.959139] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:25.497 [2024-07-24 05:10:39.959149] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.497 [2024-07-24 05:10:39.959174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:25.497 [2024-07-24 05:10:39.959185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:25.497 [2024-07-24 05:10:39.959195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:25.497 
[2024-07-24 05:10:39.959204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:25.497 [2024-07-24 05:10:39.959213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:25.497 [2024-07-24 05:10:39.959223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:25.497 [2024-07-24 05:10:39.959233] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:25.497 [2024-07-24 05:10:39.959246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:25.497 [2024-07-24 05:10:39.959257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:25.497 [2024-07-24 05:10:39.959268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:25.497 [2024-07-24 05:10:39.959278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:25.497 [2024-07-24 05:10:39.959288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:25.497 [2024-07-24 05:10:39.959298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:25.497 [2024-07-24 05:10:39.959352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:25.497 [2024-07-24 05:10:39.959378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:25.497 [2024-07-24 05:10:39.959389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:25.497 [2024-07-24 05:10:39.959400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:25.497 [2024-07-24 05:10:39.959410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:25.497 [2024-07-24 05:10:39.959421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:25.497 [2024-07-24 05:10:39.959431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:25.497 [2024-07-24 05:10:39.959443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:25.497 [2024-07-24 05:10:39.959454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:25.497 [2024-07-24 05:10:39.959466] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:25.497 [2024-07-24 05:10:39.959482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:25.497 [2024-07-24 05:10:39.959494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:25.497 [2024-07-24 05:10:39.959505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:25.497 [2024-07-24 05:10:39.959516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:25.497 [2024-07-24 05:10:39.959527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:25.497 [2024-07-24 05:10:39.959538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.497 [2024-07-24 05:10:39.959549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:25.497 [2024-07-24 05:10:39.959560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:21:25.497 [2024-07-24 05:10:39.959571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.497 [2024-07-24 05:10:39.997714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.497 [2024-07-24 05:10:39.997774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:25.497 [2024-07-24 05:10:39.997811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.020 ms 00:21:25.497 [2024-07-24 05:10:39.997837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.497 [2024-07-24 05:10:39.997982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.497 [2024-07-24 05:10:39.998016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:25.497 [2024-07-24 05:10:39.998029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:25.497 [2024-07-24 05:10:39.998039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.497 [2024-07-24 05:10:40.041443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.497 [2024-07-24 05:10:40.041567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:25.497 [2024-07-24 05:10:40.041602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.308 ms 00:21:25.497 [2024-07-24 05:10:40.041622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.497 [2024-07-24 05:10:40.041728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.497 [2024-07-24 05:10:40.041754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:25.497 [2024-07-24 05:10:40.041775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:25.497 [2024-07-24 05:10:40.041805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.497 [2024-07-24 05:10:40.042468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.497 [2024-07-24 05:10:40.042508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:25.497 [2024-07-24 05:10:40.042535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:21:25.497 [2024-07-24 05:10:40.042556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.497 [2024-07-24 05:10:40.042830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.497 [2024-07-24 05:10:40.042884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:25.497 [2024-07-24 05:10:40.042927] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:21:25.497 [2024-07-24 05:10:40.042982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.497 [2024-07-24 05:10:40.057352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.497 [2024-07-24 05:10:40.057389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:25.497 [2024-07-24 05:10:40.057425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.298 ms 00:21:25.497 [2024-07-24 05:10:40.057436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.497 [2024-07-24 05:10:40.072015] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:25.497 [2024-07-24 05:10:40.072053] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:25.497 [2024-07-24 05:10:40.072086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.497 [2024-07-24 05:10:40.072096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:25.497 [2024-07-24 05:10:40.072107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.534 ms 00:21:25.497 [2024-07-24 05:10:40.072116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.497 [2024-07-24 05:10:40.098017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.497 [2024-07-24 05:10:40.098063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:25.497 [2024-07-24 05:10:40.098096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.859 ms 00:21:25.497 [2024-07-24 05:10:40.098107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.497 [2024-07-24 05:10:40.112595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.497 [2024-07-24 05:10:40.112664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:25.497 [2024-07-24 05:10:40.112695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.454 ms 00:21:25.497 [2024-07-24 05:10:40.112705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.757 [2024-07-24 05:10:40.128056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.757 [2024-07-24 05:10:40.128093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:25.757 [2024-07-24 05:10:40.128124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.309 ms 00:21:25.757 [2024-07-24 05:10:40.128134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.757 [2024-07-24 05:10:40.128880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.757 [2024-07-24 05:10:40.128927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:25.757 [2024-07-24 05:10:40.128943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:21:25.757 [2024-07-24 05:10:40.128959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.757 [2024-07-24 05:10:40.195701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.757 [2024-07-24 05:10:40.195783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:25.757 [2024-07-24 05:10:40.195824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.716 ms 00:21:25.757 [2024-07-24 05:10:40.195835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.757 [2024-07-24 05:10:40.208279] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:25.757 [2024-07-24 05:10:40.211121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.757 [2024-07-24 05:10:40.211171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:25.757 [2024-07-24 05:10:40.211204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.183 ms 00:21:25.757 [2024-07-24 05:10:40.211215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.757 [2024-07-24 05:10:40.211354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.757 [2024-07-24 05:10:40.211376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:25.757 [2024-07-24 05:10:40.211391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:25.757 [2024-07-24 05:10:40.211407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.757 [2024-07-24 05:10:40.211502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.757 [2024-07-24 05:10:40.211520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:25.757 [2024-07-24 05:10:40.211533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:25.757 [2024-07-24 05:10:40.211545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.757 [2024-07-24 05:10:40.211578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.757 [2024-07-24 05:10:40.211594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:25.757 [2024-07-24 05:10:40.211606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:25.757 [2024-07-24 05:10:40.211617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.757 [2024-07-24 05:10:40.211697] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:25.757 [2024-07-24 05:10:40.211717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.757 [2024-07-24 05:10:40.211727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:25.757 [2024-07-24 05:10:40.211738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:25.757 [2024-07-24 05:10:40.211762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.757 [2024-07-24 05:10:40.243682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.757 [2024-07-24 05:10:40.243741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:25.757 [2024-07-24 05:10:40.243781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.893 ms 00:21:25.757 [2024-07-24 05:10:40.243795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.757 [2024-07-24 05:10:40.243926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.757 [2024-07-24 05:10:40.243946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:25.757 [2024-07-24 05:10:40.243959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:25.757 [2024-07-24 05:10:40.243986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:25.757 [2024-07-24 05:10:40.245327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.323 ms, result 0 00:22:11.018  (intermediate spdk_dd copy progress updates elided; throughput steady at 21-23 MBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-24 05:11:25.445349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.018 [2024-07-24 05:11:25.445413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:11.018 [2024-07-24 05:11:25.445449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:11.018 [2024-07-24 05:11:25.445459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.018 [2024-07-24 05:11:25.445487] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:11.018 [2024-07-24 05:11:25.448888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.018 [2024-07-24 05:11:25.448918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:11.018 [2024-07-24 05:11:25.448938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.377 ms 00:22:11.018 [2024-07-24 05:11:25.448948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.018 [2024-07-24 05:11:25.449152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.018 [2024-07-24 05:11:25.449169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:11.018 [2024-07-24 05:11:25.449180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:22:11.018 [2024-07-24 05:11:25.449190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.018 [2024-07-24 05:11:25.452536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.018 [2024-07-24 05:11:25.452743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:11.018 [2024-07-24 05:11:25.452892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.327 ms 00:22:11.018 [2024-07-24 05:11:25.453017]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.018 [2024-07-24 05:11:25.458987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.018 [2024-07-24 05:11:25.459124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:11.018 [2024-07-24 05:11:25.459252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.896 ms 00:22:11.018 [2024-07-24 05:11:25.459390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.018 [2024-07-24 05:11:25.487374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.018 [2024-07-24 05:11:25.487549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:11.018 [2024-07-24 05:11:25.487692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.867 ms 00:22:11.018 [2024-07-24 05:11:25.487825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.018 [2024-07-24 05:11:25.504437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.018 [2024-07-24 05:11:25.504615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:11.018 [2024-07-24 05:11:25.504755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.488 ms 00:22:11.018 [2024-07-24 05:11:25.504804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.018 [2024-07-24 05:11:25.504974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.018 [2024-07-24 05:11:25.505034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:11.018 [2024-07-24 05:11:25.505074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:11.018 [2024-07-24 05:11:25.505194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.018 [2024-07-24 05:11:25.534134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.018 [2024-07-24 05:11:25.534340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:11.018 [2024-07-24 05:11:25.534464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.877 ms 00:22:11.018 [2024-07-24 05:11:25.534511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.018 [2024-07-24 05:11:25.563350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.018 [2024-07-24 05:11:25.563537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:11.018 [2024-07-24 05:11:25.563642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.785 ms 00:22:11.018 [2024-07-24 05:11:25.563688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.018 [2024-07-24 05:11:25.591353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.018 [2024-07-24 05:11:25.591546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:11.018 [2024-07-24 05:11:25.591710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.612 ms 00:22:11.018 [2024-07-24 05:11:25.591770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.018 [2024-07-24 05:11:25.618457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.018 [2024-07-24 05:11:25.618626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:11.018 [2024-07-24 05:11:25.618733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 26.582 ms 00:22:11.018 [2024-07-24 05:11:25.618779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.018 [2024-07-24 05:11:25.618917] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:11.018 [2024-07-24 05:11:25.618975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-97: 0 / 261120 wr_cnt: 0 state: free (97 identical per-band entries elided) 00:22:11.019 [2024-07-24 05:11:25.621052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:11.020 [2024-07-24 05:11:25.621062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:11.020 [2024-07-24 05:11:25.621073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:11.020 [2024-07-24 05:11:25.621093] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:11.020 [2024-07-24 05:11:25.621111] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7194b92d-8659-4940-8cfd-a816cffde944 00:22:11.020 [2024-07-24 05:11:25.621121] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:11.020 [2024-07-24 05:11:25.621131] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:11.020 [2024-07-24 05:11:25.621140] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:11.020 [2024-07-24 05:11:25.621150] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:11.020 [2024-07-24 05:11:25.621159] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:11.020 [2024-07-24 05:11:25.621169] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:11.020 [2024-07-24 05:11:25.621178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:11.020 [2024-07-24 05:11:25.621187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:11.020 [2024-07-24 05:11:25.621196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:11.020 [2024-07-24 05:11:25.621207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.020 [2024-07-24 05:11:25.621221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:11.020 [2024-07-24 05:11:25.621232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.293 ms 00:22:11.020 [2024-07-24 05:11:25.621242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.020 [2024-07-24 05:11:25.636927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.020 [2024-07-24 05:11:25.636967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:11.020 [2024-07-24 05:11:25.636997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.638 ms 00:22:11.020 [2024-07-24 05:11:25.637009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.020 [2024-07-24 05:11:25.637453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.020 [2024-07-24 05:11:25.637482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:11.020 [2024-07-24 05:11:25.637497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:22:11.020 [2024-07-24 05:11:25.637514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.279 [2024-07-24 05:11:25.673804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.279 [2024-07-24 05:11:25.673889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:11.279 [2024-07-24 05:11:25.673922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.279 [2024-07-24 05:11:25.673933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.279 [2024-07-24 05:11:25.674025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.279 [2024-07-24 
05:11:25.674040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:11.279 [2024-07-24 05:11:25.674050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.279 [2024-07-24 05:11:25.674067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.279 [2024-07-24 05:11:25.674181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.279 [2024-07-24 05:11:25.674216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:11.279 [2024-07-24 05:11:25.674244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.279 [2024-07-24 05:11:25.674255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.279 [2024-07-24 05:11:25.674276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.279 [2024-07-24 05:11:25.674290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:11.279 [2024-07-24 05:11:25.674301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.279 [2024-07-24 05:11:25.674311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.279 [2024-07-24 05:11:25.756876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.279 [2024-07-24 05:11:25.756939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:11.279 [2024-07-24 05:11:25.756971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.279 [2024-07-24 05:11:25.756981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.279 [2024-07-24 05:11:25.831532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.279 [2024-07-24 05:11:25.831595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:11.279 [2024-07-24 05:11:25.831643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.279 [2024-07-24 05:11:25.831662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.279 [2024-07-24 05:11:25.831798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.279 [2024-07-24 05:11:25.831814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:11.279 [2024-07-24 05:11:25.831825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.279 [2024-07-24 05:11:25.831835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.279 [2024-07-24 05:11:25.831909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.279 [2024-07-24 05:11:25.831924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:11.279 [2024-07-24 05:11:25.831935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.280 [2024-07-24 05:11:25.832003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.280 [2024-07-24 05:11:25.832131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.280 [2024-07-24 05:11:25.832157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:11.280 [2024-07-24 05:11:25.832171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.280 [2024-07-24 05:11:25.832196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.280 [2024-07-24 05:11:25.832246] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.280 [2024-07-24 05:11:25.832263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:11.280 [2024-07-24 05:11:25.832274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.280 [2024-07-24 05:11:25.832284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.280 [2024-07-24 05:11:25.832380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.280 [2024-07-24 05:11:25.832418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:11.280 [2024-07-24 05:11:25.832430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.280 [2024-07-24 05:11:25.832441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.280 [2024-07-24 05:11:25.832506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.280 [2024-07-24 05:11:25.832522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:11.280 [2024-07-24 05:11:25.832533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.280 [2024-07-24 05:11:25.832543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.280 [2024-07-24 05:11:25.832675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 387.307 ms, result 0 00:22:12.216 00:22:12.216 00:22:12.217 05:11:26 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:14.121 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:14.121 05:11:28 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:14.380 [2024-07-24 05:11:28.803130] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:22:14.380 [2024-07-24 05:11:28.803563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82028 ] 00:22:14.380 [2024-07-24 05:11:28.966649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.638 [2024-07-24 05:11:29.166172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.897 [2024-07-24 05:11:29.429200] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:14.897 [2024-07-24 05:11:29.429301] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:15.157 [2024-07-24 05:11:29.589160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.157 [2024-07-24 05:11:29.589255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:15.157 [2024-07-24 05:11:29.589305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:15.157 [2024-07-24 05:11:29.589316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.157 [2024-07-24 05:11:29.589377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.157 [2024-07-24 05:11:29.589395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:15.157 [2024-07-24 05:11:29.589406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:15.157 [2024-07-24 05:11:29.589420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.157 [2024-07-24 05:11:29.589452] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:15.157 [2024-07-24 05:11:29.590352] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:15.157 [2024-07-24 05:11:29.590385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.157 [2024-07-24 05:11:29.590399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:15.157 [2024-07-24 05:11:29.590410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:22:15.157 [2024-07-24 05:11:29.590420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.157 [2024-07-24 05:11:29.591820] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:15.157 [2024-07-24 05:11:29.606634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.157 [2024-07-24 05:11:29.606675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:15.157 [2024-07-24 05:11:29.606708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.816 ms 00:22:15.157 [2024-07-24 05:11:29.606718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.157 [2024-07-24 05:11:29.606787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.157 [2024-07-24 05:11:29.606808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:15.157 [2024-07-24 05:11:29.606819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:15.157 [2024-07-24 05:11:29.606829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.157 [2024-07-24 05:11:29.611513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:15.157 [2024-07-24 05:11:29.611555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:15.157 [2024-07-24 05:11:29.611586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.549 ms 00:22:15.157 [2024-07-24 05:11:29.611597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.157 [2024-07-24 05:11:29.611702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.157 [2024-07-24 05:11:29.611719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:15.157 [2024-07-24 05:11:29.611730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:15.157 [2024-07-24 05:11:29.611740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.157 [2024-07-24 05:11:29.611802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.157 [2024-07-24 05:11:29.611818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:15.157 [2024-07-24 05:11:29.611830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:15.157 [2024-07-24 05:11:29.611839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.157 [2024-07-24 05:11:29.611905] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:15.157 [2024-07-24 05:11:29.615786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.157 [2024-07-24 05:11:29.615821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:15.157 [2024-07-24 05:11:29.615850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.926 ms 00:22:15.157 [2024-07-24 05:11:29.615891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.157 [2024-07-24 05:11:29.615933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.157 [2024-07-24 05:11:29.615949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:15.157 [2024-07-24 05:11:29.615960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:15.157 [2024-07-24 05:11:29.615969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.157 [2024-07-24 05:11:29.616012] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:15.157 [2024-07-24 05:11:29.616041] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:15.157 [2024-07-24 05:11:29.616081] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:15.157 [2024-07-24 05:11:29.616102] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:15.157 [2024-07-24 05:11:29.616194] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:15.157 [2024-07-24 05:11:29.616209] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:15.157 [2024-07-24 05:11:29.616236] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:15.157 [2024-07-24 05:11:29.616249] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:15.157 [2024-07-24 05:11:29.616261] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:15.157 [2024-07-24 05:11:29.616272] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:15.157 [2024-07-24 05:11:29.616281] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:15.157 [2024-07-24 05:11:29.616290] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:15.157 [2024-07-24 05:11:29.616299] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:15.157 [2024-07-24 05:11:29.616310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.157 [2024-07-24 05:11:29.616324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:15.157 [2024-07-24 05:11:29.616334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:22:15.157 [2024-07-24 05:11:29.616343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.158 [2024-07-24 05:11:29.616418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.158 [2024-07-24 05:11:29.616431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:15.158 [2024-07-24 05:11:29.616441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:15.158 [2024-07-24 05:11:29.616450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.158 [2024-07-24 05:11:29.616541] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:15.158 [2024-07-24 05:11:29.616557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:15.158 [2024-07-24 05:11:29.616572] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:15.158 [2024-07-24 05:11:29.616582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.158 [2024-07-24 05:11:29.616592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:15.158 [2024-07-24 05:11:29.616601] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:15.158 [2024-07-24 05:11:29.616610] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:15.158 [2024-07-24 05:11:29.616620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:15.158 [2024-07-24 05:11:29.616629] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:15.158 [2024-07-24 05:11:29.616638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:15.158 [2024-07-24 05:11:29.616647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:15.158 [2024-07-24 05:11:29.616656] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:15.158 [2024-07-24 05:11:29.616665] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:15.158 [2024-07-24 05:11:29.616673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:15.158 [2024-07-24 05:11:29.616685] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:15.158 [2024-07-24 05:11:29.616693] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.158 [2024-07-24 05:11:29.616703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:15.158 [2024-07-24 05:11:29.616712] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:15.158 [2024-07-24 05:11:29.616720] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.158 [2024-07-24 05:11:29.616729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:15.158 [2024-07-24 05:11:29.616749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:15.158 [2024-07-24 05:11:29.616759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.158 [2024-07-24 05:11:29.616768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:15.158 [2024-07-24 05:11:29.616777] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:15.158 [2024-07-24 05:11:29.616785] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.158 [2024-07-24 05:11:29.616794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:15.158 [2024-07-24 05:11:29.616803] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:15.158 [2024-07-24 05:11:29.616812] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.158 [2024-07-24 05:11:29.616820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:15.158 [2024-07-24 05:11:29.616829] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:15.158 [2024-07-24 05:11:29.616838] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.158 [2024-07-24 05:11:29.616846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:15.158 [2024-07-24 05:11:29.616855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:15.158 [2024-07-24 05:11:29.616864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:15.158 [2024-07-24 05:11:29.616912] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:15.158 [2024-07-24 05:11:29.616923] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:15.158 [2024-07-24 05:11:29.616932] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:15.158 [2024-07-24 05:11:29.616941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:15.158 [2024-07-24 05:11:29.616950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:15.158 [2024-07-24 05:11:29.616959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.158 [2024-07-24 05:11:29.616968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:15.158 [2024-07-24 05:11:29.616978] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:15.158 [2024-07-24 05:11:29.617004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.158 [2024-07-24 05:11:29.617014] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:15.158 [2024-07-24 05:11:29.617025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:15.158 [2024-07-24 05:11:29.617034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:15.158 [2024-07-24 05:11:29.617049] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.158 [2024-07-24 05:11:29.617059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:15.158 [2024-07-24 05:11:29.617069] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:15.158 [2024-07-24 05:11:29.617078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:15.158 
[2024-07-24 05:11:29.617088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:15.158 [2024-07-24 05:11:29.617097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:15.158 [2024-07-24 05:11:29.617106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:15.158 [2024-07-24 05:11:29.617117] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:15.158 [2024-07-24 05:11:29.617129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:15.158 [2024-07-24 05:11:29.617157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:15.158 [2024-07-24 05:11:29.617183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:15.158 [2024-07-24 05:11:29.617194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:15.158 [2024-07-24 05:11:29.617208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:15.158 [2024-07-24 05:11:29.617220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:15.158 [2024-07-24 05:11:29.617247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:15.158 [2024-07-24 05:11:29.617258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:15.158 [2024-07-24 05:11:29.617268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:15.158 [2024-07-24 05:11:29.617279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:15.158 [2024-07-24 05:11:29.617304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:15.158 [2024-07-24 05:11:29.617315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:15.158 [2024-07-24 05:11:29.617325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:15.158 [2024-07-24 05:11:29.617336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:15.158 [2024-07-24 05:11:29.617347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:15.158 [2024-07-24 05:11:29.617358] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:15.158 [2024-07-24 05:11:29.617370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:15.158 [2024-07-24 05:11:29.617387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:15.158 [2024-07-24 05:11:29.617398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:15.158 [2024-07-24 05:11:29.617409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:15.158 [2024-07-24 05:11:29.617420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:15.158 [2024-07-24 05:11:29.617432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.158 [2024-07-24 05:11:29.617443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:15.158 [2024-07-24 05:11:29.617454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:22:15.158 [2024-07-24 05:11:29.617466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.158 [2024-07-24 05:11:29.653962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.158 [2024-07-24 05:11:29.654021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:15.158 [2024-07-24 05:11:29.654057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.411 ms 00:22:15.158 [2024-07-24 05:11:29.654068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.158 [2024-07-24 05:11:29.654181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.158 [2024-07-24 05:11:29.654197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:15.158 [2024-07-24 05:11:29.654208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:15.158 [2024-07-24 05:11:29.654219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.158 [2024-07-24 05:11:29.687458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.158 [2024-07-24 05:11:29.687532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:15.158 [2024-07-24 05:11:29.687567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.120 ms 00:22:15.158 [2024-07-24 05:11:29.687577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.158 [2024-07-24 05:11:29.687673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.159 [2024-07-24 05:11:29.687689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:15.159 [2024-07-24 05:11:29.687701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:15.159 [2024-07-24 05:11:29.687717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.159 [2024-07-24 05:11:29.688449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.159 [2024-07-24 05:11:29.688578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:15.159 [2024-07-24 05:11:29.688686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:22:15.159 [2024-07-24 05:11:29.688823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.159 [2024-07-24 05:11:29.689035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.159 [2024-07-24 05:11:29.689064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:15.159 [2024-07-24 05:11:29.689079] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:22:15.159 [2024-07-24 05:11:29.689104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.159 [2024-07-24 05:11:29.703017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.159 [2024-07-24 05:11:29.703054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:15.159 [2024-07-24 05:11:29.703084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.882 ms 00:22:15.159 [2024-07-24 05:11:29.703099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.159 [2024-07-24 05:11:29.716997] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:15.159 [2024-07-24 05:11:29.717038] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:15.159 [2024-07-24 05:11:29.717071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.159 [2024-07-24 05:11:29.717082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:15.159 [2024-07-24 05:11:29.717094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.849 ms 00:22:15.159 [2024-07-24 05:11:29.717104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.159 [2024-07-24 05:11:29.745245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.159 [2024-07-24 05:11:29.745318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:15.159 [2024-07-24 05:11:29.745349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.097 ms 00:22:15.159 [2024-07-24 05:11:29.745359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.159 [2024-07-24 05:11:29.758440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.159 [2024-07-24 05:11:29.758491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:15.159 [2024-07-24 05:11:29.758522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.054 ms 00:22:15.159 [2024-07-24 05:11:29.758532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.159 [2024-07-24 05:11:29.771829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.159 [2024-07-24 05:11:29.771890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:15.159 [2024-07-24 05:11:29.771920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.259 ms 00:22:15.159 [2024-07-24 05:11:29.771930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.159 [2024-07-24 05:11:29.772661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.159 [2024-07-24 05:11:29.772698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:15.159 [2024-07-24 05:11:29.772713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:22:15.159 [2024-07-24 05:11:29.772723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.418 [2024-07-24 05:11:29.836142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.418 [2024-07-24 05:11:29.836207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:15.418 [2024-07-24 05:11:29.836241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.391 ms 00:22:15.418 [2024-07-24 05:11:29.836259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.418 [2024-07-24 05:11:29.847229] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:15.418 [2024-07-24 05:11:29.849935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.418 [2024-07-24 05:11:29.849969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:15.418 [2024-07-24 05:11:29.850003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.597 ms 00:22:15.418 [2024-07-24 05:11:29.850013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.418 [2024-07-24 05:11:29.850127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.418 [2024-07-24 05:11:29.850148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:15.418 [2024-07-24 05:11:29.850161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:15.418 [2024-07-24 05:11:29.850186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.418 [2024-07-24 05:11:29.850302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.418 [2024-07-24 05:11:29.850319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:15.418 [2024-07-24 05:11:29.850330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:15.418 [2024-07-24 05:11:29.850340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.418 [2024-07-24 05:11:29.850368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.418 [2024-07-24 05:11:29.850381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:15.418 [2024-07-24 05:11:29.850391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:15.418 [2024-07-24 05:11:29.850401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.418 [2024-07-24 05:11:29.850432] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:15.418 [2024-07-24 05:11:29.850446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.418 [2024-07-24 05:11:29.850459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:15.418 [2024-07-24 05:11:29.850469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:15.418 [2024-07-24 05:11:29.850479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.418 [2024-07-24 05:11:29.877526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.419 [2024-07-24 05:11:29.877733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:15.419 [2024-07-24 05:11:29.877865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.022 ms 00:22:15.419 [2024-07-24 05:11:29.877924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.419 [2024-07-24 05:11:29.878096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.419 [2024-07-24 05:11:29.878153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:15.419 [2024-07-24 05:11:29.878336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:15.419 [2024-07-24 05:11:29.878384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:15.419 [2024-07-24 05:11:29.879696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 289.928 ms, result 0 00:22:59.515  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-24 05:12:13.837122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.515 [2024-07-24 05:12:13.837195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:59.515 [2024-07-24 05:12:13.837218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:59.515 [2024-07-24 05:12:13.837230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.515 [2024-07-24 05:12:13.838336] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:59.515 [2024-07-24 05:12:13.844031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.515 [2024-07-24 05:12:13.844074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:59.515 [2024-07-24 05:12:13.844092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.651 ms 00:22:59.515 [2024-07-24 05:12:13.844104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.515 [2024-07-24 05:12:13.857814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.515 [2024-07-24 05:12:13.857890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:59.515 [2024-07-24 05:12:13.857911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.184 ms 00:22:59.515 [2024-07-24 05:12:13.857923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.515 [2024-07-24 05:12:13.877678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.515 [2024-07-24 05:12:13.877728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:59.515 [2024-07-24 05:12:13.877748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.721 ms 00:22:59.515 [2024-07-24 05:12:13.877761] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:59.515 [2024-07-24 05:12:13.884562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.515 [2024-07-24 05:12:13.884598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:59.515 [2024-07-24 05:12:13.884629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.750 ms 00:22:59.515 [2024-07-24 05:12:13.884640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.515 [2024-07-24 05:12:13.916424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.515 [2024-07-24 05:12:13.916470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:59.515 [2024-07-24 05:12:13.916505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.705 ms 00:22:59.515 [2024-07-24 05:12:13.916517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.515 [2024-07-24 05:12:13.934533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.515 [2024-07-24 05:12:13.934583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:59.515 [2024-07-24 05:12:13.934617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.966 ms 00:22:59.515 [2024-07-24 05:12:13.934630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.515 [2024-07-24 05:12:14.035755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.515 [2024-07-24 05:12:14.035816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:59.515 [2024-07-24 05:12:14.035849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.074 ms 00:22:59.515 [2024-07-24 05:12:14.035865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.515 [2024-07-24 05:12:14.067449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.515 [2024-07-24 05:12:14.067491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:59.515 [2024-07-24 05:12:14.067508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.560 ms 00:22:59.515 [2024-07-24 05:12:14.067520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.515 [2024-07-24 05:12:14.098464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.515 [2024-07-24 05:12:14.098503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:59.515 [2024-07-24 05:12:14.098535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.897 ms 00:22:59.515 [2024-07-24 05:12:14.098546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.515 [2024-07-24 05:12:14.129944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.515 [2024-07-24 05:12:14.129997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:59.515 [2024-07-24 05:12:14.130030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.354 ms 00:22:59.515 [2024-07-24 05:12:14.130042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.775 [2024-07-24 05:12:14.162266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.775 [2024-07-24 05:12:14.162306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:59.775 [2024-07-24 05:12:14.162338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.128 ms 
00:22:59.775 [2024-07-24 05:12:14.162349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.775 [2024-07-24 05:12:14.162393] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:59.775 [2024-07-24 05:12:14.162416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 120832 / 261120 wr_cnt: 1 state: open 00:22:59.775 [2024-07-24 05:12:14.162430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162712] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:59.775 [2024-07-24 05:12:14.162781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.162994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 
[2024-07-24 05:12:14.163041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:22:59.776 [2024-07-24 05:12:14.163340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:59.776 [2024-07-24 05:12:14.163699] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:59.776 [2024-07-24 05:12:14.163711] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7194b92d-8659-4940-8cfd-a816cffde944 00:22:59.776 [2024-07-24 05:12:14.163723] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 120832 00:22:59.776 [2024-07-24 05:12:14.163734] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121792 00:22:59.776 [2024-07-24 05:12:14.163745] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 120832 00:22:59.776 [2024-07-24 05:12:14.163761] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:22:59.776 [2024-07-24 05:12:14.163772] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:59.776 [2024-07-24 05:12:14.163784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:59.776 [2024-07-24 05:12:14.163798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:59.776 [2024-07-24 05:12:14.163808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:59.776 [2024-07-24 05:12:14.163818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:59.776 [2024-07-24 05:12:14.163829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.776 [2024-07-24 05:12:14.163851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:59.776 [2024-07-24 05:12:14.163865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:22:59.776 [2024-07-24 05:12:14.163877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.776 [2024-07-24 05:12:14.180877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.776 [2024-07-24 05:12:14.180924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:59.776 [2024-07-24 05:12:14.180954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.955 ms 00:22:59.776 [2024-07-24 05:12:14.180966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.776 [2024-07-24 05:12:14.181416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.776 [2024-07-24 05:12:14.181445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:59.776 [2024-07-24 05:12:14.181459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:22:59.776 [2024-07-24 05:12:14.181470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.777 [2024-07-24 05:12:14.218585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.777 [2024-07-24 05:12:14.218628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:59.777 [2024-07-24 05:12:14.218671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.777 [2024-07-24 05:12:14.218681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.777 [2024-07-24 05:12:14.218743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.777 [2024-07-24 05:12:14.218758] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:59.777 [2024-07-24 05:12:14.218769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.777 [2024-07-24 05:12:14.218780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.777 [2024-07-24 05:12:14.218905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.777 [2024-07-24 05:12:14.218930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:59.777 [2024-07-24 05:12:14.218943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.777 [2024-07-24 05:12:14.218968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.777 [2024-07-24 05:12:14.218991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.777 [2024-07-24 05:12:14.219004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:59.777 [2024-07-24 05:12:14.219015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.777 [2024-07-24 05:12:14.219026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.777 [2024-07-24 05:12:14.313983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.777 [2024-07-24 05:12:14.314041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:59.777 [2024-07-24 05:12:14.314076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.777 [2024-07-24 05:12:14.314094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.777 [2024-07-24 05:12:14.392310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.777 [2024-07-24 05:12:14.392367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:59.777 [2024-07-24 05:12:14.392401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.777 [2024-07-24 05:12:14.392412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.777 [2024-07-24 05:12:14.392484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.777 [2024-07-24 05:12:14.392500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:59.777 [2024-07-24 05:12:14.392511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.777 [2024-07-24 05:12:14.392521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.777 [2024-07-24 05:12:14.392614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.777 [2024-07-24 05:12:14.392647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:59.777 [2024-07-24 05:12:14.392658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.777 [2024-07-24 05:12:14.392669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.777 [2024-07-24 05:12:14.392780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.777 [2024-07-24 05:12:14.392804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:59.777 [2024-07-24 05:12:14.392816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.777 [2024-07-24 05:12:14.392828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.777 [2024-07-24 05:12:14.392927] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.777 [2024-07-24 05:12:14.392963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:59.777 [2024-07-24 05:12:14.392976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.777 [2024-07-24 05:12:14.392987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.777 [2024-07-24 05:12:14.393032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.777 [2024-07-24 05:12:14.393048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:59.777 [2024-07-24 05:12:14.393060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.777 [2024-07-24 05:12:14.393071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.777 [2024-07-24 05:12:14.393131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.777 [2024-07-24 05:12:14.393149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:59.777 [2024-07-24 05:12:14.393162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.777 [2024-07-24 05:12:14.393173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.777 [2024-07-24 05:12:14.393350] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 558.338 ms, result 0 00:23:01.714 00:23:01.714 00:23:01.714 05:12:15 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:01.714 [2024-07-24 05:12:15.984619] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
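The spdk_dd restore invocation above addresses the FTL bdev in blocks: --skip=131072 sets the starting offset on the input and --count=262144 the number of blocks to copy. Assuming a 4096-byte FTL block size (an assumption, but one consistent with the 1024 MB copy total reported further below), that works out to a 512 MiB offset and a 1 GiB read; a quick check:

# Sketch only: decode spdk_dd's --skip/--count for this run, assuming the
# ftl0 bdev exposes 4096-byte blocks (the block size itself is not printed
# in this log; 4096 B matches the 1024 MB copy total reported below).
BLOCK_SIZE = 4096
skip_blocks, count_blocks = 131072, 262144
print(skip_blocks * BLOCK_SIZE // 2**20, "MiB offset")    # -> 512 MiB offset
print(count_blocks * BLOCK_SIZE // 2**20, "MiB to copy")  # -> 1024 MiB to copy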
00:23:01.714 [2024-07-24 05:12:15.984792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82495 ] 00:23:01.714 [2024-07-24 05:12:16.154887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.714 [2024-07-24 05:12:16.324979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.282 [2024-07-24 05:12:16.625175] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:02.282 [2024-07-24 05:12:16.625294] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:02.282 [2024-07-24 05:12:16.782099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.282 [2024-07-24 05:12:16.782160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:02.282 [2024-07-24 05:12:16.782195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:02.282 [2024-07-24 05:12:16.782205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.282 [2024-07-24 05:12:16.782264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.282 [2024-07-24 05:12:16.782280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.282 [2024-07-24 05:12:16.782291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:02.282 [2024-07-24 05:12:16.782305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.282 [2024-07-24 05:12:16.782335] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:02.282 [2024-07-24 05:12:16.783184] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:02.282 [2024-07-24 05:12:16.783217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.282 [2024-07-24 05:12:16.783246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.282 [2024-07-24 05:12:16.783269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:23:02.282 [2024-07-24 05:12:16.783279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.282 [2024-07-24 05:12:16.784478] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:02.282 [2024-07-24 05:12:16.798858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.282 [2024-07-24 05:12:16.798912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:02.282 [2024-07-24 05:12:16.798945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.382 ms 00:23:02.282 [2024-07-24 05:12:16.798956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.282 [2024-07-24 05:12:16.799039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.282 [2024-07-24 05:12:16.799060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:02.282 [2024-07-24 05:12:16.799081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:02.282 [2024-07-24 05:12:16.799092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.282 [2024-07-24 05:12:16.803762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:02.282 [2024-07-24 05:12:16.803986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.282 [2024-07-24 05:12:16.804110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.581 ms 00:23:02.282 [2024-07-24 05:12:16.804160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.282 [2024-07-24 05:12:16.804291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.282 [2024-07-24 05:12:16.804369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.282 [2024-07-24 05:12:16.804433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:02.282 [2024-07-24 05:12:16.804470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.282 [2024-07-24 05:12:16.804557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.282 [2024-07-24 05:12:16.804666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:02.282 [2024-07-24 05:12:16.804720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:02.282 [2024-07-24 05:12:16.804756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.282 [2024-07-24 05:12:16.804816] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:02.282 [2024-07-24 05:12:16.808869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.282 [2024-07-24 05:12:16.809070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.282 [2024-07-24 05:12:16.809112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.062 ms 00:23:02.282 [2024-07-24 05:12:16.809124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.282 [2024-07-24 05:12:16.809184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.282 [2024-07-24 05:12:16.809201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:02.282 [2024-07-24 05:12:16.809214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:02.282 [2024-07-24 05:12:16.809225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.282 [2024-07-24 05:12:16.809317] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:02.282 [2024-07-24 05:12:16.809353] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:02.282 [2024-07-24 05:12:16.809410] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:02.282 [2024-07-24 05:12:16.809432] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:02.282 [2024-07-24 05:12:16.809533] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:02.283 [2024-07-24 05:12:16.809549] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:02.283 [2024-07-24 05:12:16.809564] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:02.283 [2024-07-24 05:12:16.809593] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:02.283 [2024-07-24 05:12:16.809620] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:02.283 [2024-07-24 05:12:16.809630] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:02.283 [2024-07-24 05:12:16.809640] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:02.283 [2024-07-24 05:12:16.809650] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:02.283 [2024-07-24 05:12:16.809659] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:02.283 [2024-07-24 05:12:16.809670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.283 [2024-07-24 05:12:16.809683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:02.283 [2024-07-24 05:12:16.809694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:23:02.283 [2024-07-24 05:12:16.809704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.283 [2024-07-24 05:12:16.809787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.283 [2024-07-24 05:12:16.809801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:02.283 [2024-07-24 05:12:16.809812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:02.283 [2024-07-24 05:12:16.809821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.283 [2024-07-24 05:12:16.809975] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:02.283 [2024-07-24 05:12:16.809995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:02.283 [2024-07-24 05:12:16.810013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.283 [2024-07-24 05:12:16.810034] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.283 [2024-07-24 05:12:16.810045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:02.283 [2024-07-24 05:12:16.810054] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:02.283 [2024-07-24 05:12:16.810064] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:02.283 [2024-07-24 05:12:16.810074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:02.283 [2024-07-24 05:12:16.810084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:02.283 [2024-07-24 05:12:16.810094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.283 [2024-07-24 05:12:16.810103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:02.283 [2024-07-24 05:12:16.810113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:02.283 [2024-07-24 05:12:16.810122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.283 [2024-07-24 05:12:16.810132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:02.283 [2024-07-24 05:12:16.810142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:02.283 [2024-07-24 05:12:16.810152] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.283 [2024-07-24 05:12:16.810162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:02.283 [2024-07-24 05:12:16.810172] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:02.283 [2024-07-24 05:12:16.810181] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.283 [2024-07-24 05:12:16.810191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:02.283 [2024-07-24 05:12:16.810212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:02.283 [2024-07-24 05:12:16.810222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.283 [2024-07-24 05:12:16.810232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:02.283 [2024-07-24 05:12:16.810257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:02.283 [2024-07-24 05:12:16.810283] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.283 [2024-07-24 05:12:16.810308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:02.283 [2024-07-24 05:12:16.810333] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:02.283 [2024-07-24 05:12:16.810359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.283 [2024-07-24 05:12:16.810368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:02.283 [2024-07-24 05:12:16.810378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:02.283 [2024-07-24 05:12:16.810388] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.283 [2024-07-24 05:12:16.810397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:02.283 [2024-07-24 05:12:16.810408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:02.283 [2024-07-24 05:12:16.810417] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.283 [2024-07-24 05:12:16.810427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:02.283 [2024-07-24 05:12:16.810437] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:02.283 [2024-07-24 05:12:16.810447] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.283 [2024-07-24 05:12:16.810456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:02.283 [2024-07-24 05:12:16.810467] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:02.283 [2024-07-24 05:12:16.810476] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.283 [2024-07-24 05:12:16.810486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:02.283 [2024-07-24 05:12:16.810496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:02.283 [2024-07-24 05:12:16.810506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.283 [2024-07-24 05:12:16.810515] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:02.283 [2024-07-24 05:12:16.810526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:02.283 [2024-07-24 05:12:16.810537] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.283 [2024-07-24 05:12:16.810553] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.283 [2024-07-24 05:12:16.810563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:02.283 [2024-07-24 05:12:16.810576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:02.283 [2024-07-24 05:12:16.810586] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:02.283 
[2024-07-24 05:12:16.810596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:02.283 [2024-07-24 05:12:16.810605] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:02.283 [2024-07-24 05:12:16.810615] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:02.283 [2024-07-24 05:12:16.810627] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:02.283 [2024-07-24 05:12:16.810641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.283 [2024-07-24 05:12:16.810653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:02.283 [2024-07-24 05:12:16.810664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:02.283 [2024-07-24 05:12:16.810675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:02.283 [2024-07-24 05:12:16.810686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:02.283 [2024-07-24 05:12:16.810697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:02.283 [2024-07-24 05:12:16.810708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:02.283 [2024-07-24 05:12:16.810719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:02.283 [2024-07-24 05:12:16.810730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:02.283 [2024-07-24 05:12:16.810741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:02.283 [2024-07-24 05:12:16.810752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:02.283 [2024-07-24 05:12:16.810763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:02.283 [2024-07-24 05:12:16.810773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:02.283 [2024-07-24 05:12:16.810785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:02.283 [2024-07-24 05:12:16.810796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:02.283 [2024-07-24 05:12:16.810807] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:02.283 [2024-07-24 05:12:16.810819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.283 [2024-07-24 05:12:16.810851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:02.283 [2024-07-24 05:12:16.810863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:02.283 [2024-07-24 05:12:16.810874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:02.283 [2024-07-24 05:12:16.810885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:02.283 [2024-07-24 05:12:16.810897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.283 [2024-07-24 05:12:16.810908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:02.283 [2024-07-24 05:12:16.810920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:23:02.283 [2024-07-24 05:12:16.810931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.283 [2024-07-24 05:12:16.847919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.283 [2024-07-24 05:12:16.847981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.284 [2024-07-24 05:12:16.848016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.914 ms 00:23:02.284 [2024-07-24 05:12:16.848026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.284 [2024-07-24 05:12:16.848138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.284 [2024-07-24 05:12:16.848153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:02.284 [2024-07-24 05:12:16.848165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:02.284 [2024-07-24 05:12:16.848174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.284 [2024-07-24 05:12:16.882462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.284 [2024-07-24 05:12:16.882519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.284 [2024-07-24 05:12:16.882553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.191 ms 00:23:02.284 [2024-07-24 05:12:16.882563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.284 [2024-07-24 05:12:16.882630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.284 [2024-07-24 05:12:16.882646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.284 [2024-07-24 05:12:16.882657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:02.284 [2024-07-24 05:12:16.882673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.284 [2024-07-24 05:12:16.883124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.284 [2024-07-24 05:12:16.883143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.284 [2024-07-24 05:12:16.883156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:23:02.284 [2024-07-24 05:12:16.883166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.284 [2024-07-24 05:12:16.883401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.284 [2024-07-24 05:12:16.883429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.284 [2024-07-24 05:12:16.883442] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:23:02.284 [2024-07-24 05:12:16.883452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.284 [2024-07-24 05:12:16.898742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.284 [2024-07-24 05:12:16.898785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.284 [2024-07-24 05:12:16.898816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.258 ms 00:23:02.284 [2024-07-24 05:12:16.898849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.542 [2024-07-24 05:12:16.914692] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:02.543 [2024-07-24 05:12:16.914733] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:02.543 [2024-07-24 05:12:16.914766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.543 [2024-07-24 05:12:16.914778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:02.543 [2024-07-24 05:12:16.914789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.728 ms 00:23:02.543 [2024-07-24 05:12:16.914799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.543 [2024-07-24 05:12:16.944817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.543 [2024-07-24 05:12:16.944922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:02.543 [2024-07-24 05:12:16.944942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.937 ms 00:23:02.543 [2024-07-24 05:12:16.944955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.543 [2024-07-24 05:12:16.960042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.543 [2024-07-24 05:12:16.960079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:02.543 [2024-07-24 05:12:16.960110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.988 ms 00:23:02.543 [2024-07-24 05:12:16.960121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.543 [2024-07-24 05:12:16.973753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.543 [2024-07-24 05:12:16.973806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:02.543 [2024-07-24 05:12:16.973837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.592 ms 00:23:02.543 [2024-07-24 05:12:16.973846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.543 [2024-07-24 05:12:16.974739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.543 [2024-07-24 05:12:16.974777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:02.543 [2024-07-24 05:12:16.974808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:23:02.543 [2024-07-24 05:12:16.974818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.543 [2024-07-24 05:12:17.038251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.543 [2024-07-24 05:12:17.038314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:02.543 [2024-07-24 05:12:17.038349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.404 ms 00:23:02.543 [2024-07-24 05:12:17.038367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.543 [2024-07-24 05:12:17.050289] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:02.543 [2024-07-24 05:12:17.052921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.543 [2024-07-24 05:12:17.052953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:02.543 [2024-07-24 05:12:17.052984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.479 ms 00:23:02.543 [2024-07-24 05:12:17.052994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.543 [2024-07-24 05:12:17.053098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.543 [2024-07-24 05:12:17.053117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:02.543 [2024-07-24 05:12:17.053130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:02.543 [2024-07-24 05:12:17.053140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.543 [2024-07-24 05:12:17.054677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.543 [2024-07-24 05:12:17.054711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:02.543 [2024-07-24 05:12:17.054741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:23:02.543 [2024-07-24 05:12:17.054751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.543 [2024-07-24 05:12:17.054783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.543 [2024-07-24 05:12:17.054797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:02.543 [2024-07-24 05:12:17.054808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:02.543 [2024-07-24 05:12:17.054818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.543 [2024-07-24 05:12:17.054881] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:02.543 [2024-07-24 05:12:17.054915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.543 [2024-07-24 05:12:17.054931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:02.543 [2024-07-24 05:12:17.054942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:02.543 [2024-07-24 05:12:17.054952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.543 [2024-07-24 05:12:17.082458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.543 [2024-07-24 05:12:17.082497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:02.543 [2024-07-24 05:12:17.082528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.482 ms 00:23:02.543 [2024-07-24 05:12:17.082545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.543 [2024-07-24 05:12:17.082619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.543 [2024-07-24 05:12:17.082636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:02.543 [2024-07-24 05:12:17.082647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:02.543 [2024-07-24 05:12:17.082656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
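The ftl_dev_dump_stats output in this log (the shutdown dump earlier, and another after the copy below) prints WAF next to the raw write counters, and in both dumps the counters are consistent with WAF simply being total writes divided by user writes:

# Sketch: check the reported write-amplification factors against the raw
# counters from the two ftl_dev_dump_stats dumps in this log; in both
# cases the printed WAF equals total_writes / user_writes after rounding.
for total, user, reported in [(121792, 120832, 1.0079), (13760, 12800, 1.0750)]:
    waf = total / user
    print(f"WAF {waf:.4f} (log reports {reported})")
    assert abs(waf - reported) < 1e-4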
00:23:02.543 [2024-07-24 05:12:17.090145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.968 ms, result 0 00:23:45.971  Copying: 22/1024 [MB] (22 MBps) Copying: 46/1024 [MB] (24 MBps) Copying: 70/1024 [MB] (23 MBps) Copying: 93/1024 [MB] (23 MBps) Copying: 117/1024 [MB] (23 MBps) Copying: 141/1024 [MB] (23 MBps) Copying: 164/1024 [MB] (23 MBps) Copying: 188/1024 [MB] (23 MBps) Copying: 212/1024 [MB] (24 MBps) Copying: 236/1024 [MB] (24 MBps) Copying: 261/1024 [MB] (24 MBps) Copying: 284/1024 [MB] (23 MBps) Copying: 309/1024 [MB] (24 MBps) Copying: 333/1024 [MB] (23 MBps) Copying: 356/1024 [MB] (23 MBps) Copying: 380/1024 [MB] (23 MBps) Copying: 404/1024 [MB] (23 MBps) Copying: 428/1024 [MB] (24 MBps) Copying: 451/1024 [MB] (23 MBps) Copying: 475/1024 [MB] (24 MBps) Copying: 499/1024 [MB] (23 MBps) Copying: 523/1024 [MB] (23 MBps) Copying: 546/1024 [MB] (23 MBps) Copying: 569/1024 [MB] (22 MBps) Copying: 591/1024 [MB] (22 MBps) Copying: 615/1024 [MB] (23 MBps) Copying: 638/1024 [MB] (23 MBps) Copying: 662/1024 [MB] (23 MBps) Copying: 685/1024 [MB] (23 MBps) Copying: 708/1024 [MB] (23 MBps) Copying: 733/1024 [MB] (24 MBps) Copying: 758/1024 [MB] (24 MBps) Copying: 782/1024 [MB] (24 MBps) Copying: 806/1024 [MB] (24 MBps) Copying: 830/1024 [MB] (24 MBps) Copying: 854/1024 [MB] (23 MBps) Copying: 879/1024 [MB] (24 MBps) Copying: 902/1024 [MB] (23 MBps) Copying: 926/1024 [MB] (23 MBps) Copying: 949/1024 [MB] (23 MBps) Copying: 973/1024 [MB] (24 MBps) Copying: 998/1024 [MB] (24 MBps) Copying: 1023/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-24 05:13:00.498165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.971 [2024-07-24 05:13:00.498237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:45.971 [2024-07-24 05:13:00.498273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:45.971 [2024-07-24 05:13:00.498285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.971 [2024-07-24 05:13:00.498330] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:45.971 [2024-07-24 05:13:00.502400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.971 [2024-07-24 05:13:00.502571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:45.971 [2024-07-24 05:13:00.502683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.046 ms 00:23:45.971 [2024-07-24 05:13:00.502807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.971 [2024-07-24 05:13:00.503140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.971 [2024-07-24 05:13:00.503207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:45.971 [2024-07-24 05:13:00.503442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:23:45.971 [2024-07-24 05:13:00.503497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.971 [2024-07-24 05:13:00.508516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.971 [2024-07-24 05:13:00.508738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:45.971 [2024-07-24 05:13:00.508879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.957 ms 00:23:45.971 [2024-07-24 05:13:00.508932] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:45.971 [2024-07-24 05:13:00.515146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.971 [2024-07-24 05:13:00.515374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:45.971 [2024-07-24 05:13:00.515522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.136 ms 00:23:45.971 [2024-07-24 05:13:00.515574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.971 [2024-07-24 05:13:00.545565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.971 [2024-07-24 05:13:00.545785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:45.971 [2024-07-24 05:13:00.545992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.837 ms 00:23:45.971 [2024-07-24 05:13:00.546045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.971 [2024-07-24 05:13:00.563828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.971 [2024-07-24 05:13:00.564053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:45.971 [2024-07-24 05:13:00.564190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.615 ms 00:23:45.971 [2024-07-24 05:13:00.564253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.229 [2024-07-24 05:13:00.683465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.229 [2024-07-24 05:13:00.683692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:46.229 [2024-07-24 05:13:00.683835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.032 ms 00:23:46.229 [2024-07-24 05:13:00.683963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.229 [2024-07-24 05:13:00.712917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.229 [2024-07-24 05:13:00.713106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:46.229 [2024-07-24 05:13:00.713228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.887 ms 00:23:46.229 [2024-07-24 05:13:00.713291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.229 [2024-07-24 05:13:00.741238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.229 [2024-07-24 05:13:00.741423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:46.229 [2024-07-24 05:13:00.741562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.768 ms 00:23:46.229 [2024-07-24 05:13:00.741610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.229 [2024-07-24 05:13:00.769401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.229 [2024-07-24 05:13:00.769588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:46.229 [2024-07-24 05:13:00.769613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.711 ms 00:23:46.229 [2024-07-24 05:13:00.769640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.229 [2024-07-24 05:13:00.798061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.229 [2024-07-24 05:13:00.798104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:46.229 [2024-07-24 05:13:00.798136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.320 ms 
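The copy above reports an average of 23 MBps over 1024 MB. Reading the timestamps as bracketing the copy (an interpretation: 'FTL startup' finished at 05:12:17.090 and the first shutdown action landed at 05:13:00.498), the wall clock agrees with that rounded average:

# Sketch: sanity-check the reported average against the log timestamps,
# assuming the copy ran between startup finishing and shutdown starting.
total_mb, avg_mbps = 1024, 23
expected_s = total_mb / avg_mbps                      # ~44.5 s at 23 MBps
elapsed_s = (13 * 60 + 0.498) - (12 * 60 + 17.090)    # 05:12:17.090 -> 05:13:00.498
print(round(elapsed_s, 1), "s elapsed,",
      round(total_mb / elapsed_s, 1), "MBps")         # ~43.4 s, ~23.6 MBps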
00:23:46.229 [2024-07-24 05:13:00.798147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.229 [2024-07-24 05:13:00.798218] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:46.229 [2024-07-24 05:13:00.798240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:23:46.229 [2024-07-24 05:13:00.798253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:46.229 [2024-07-24 05:13:00.798451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798482] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 
[2024-07-24 05:13:00.798744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.798998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:23:46.230 [2024-07-24 05:13:00.799079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:46.230 [2024-07-24 05:13:00.799460] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:46.230 [2024-07-24 05:13:00.799471] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7194b92d-8659-4940-8cfd-a816cffde944 00:23:46.230 [2024-07-24 05:13:00.799483] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:23:46.230 [2024-07-24 05:13:00.799493] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 13760 00:23:46.230 [2024-07-24 05:13:00.799505] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12800 00:23:46.230 [2024-07-24 05:13:00.799524] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0750 00:23:46.230 [2024-07-24 05:13:00.799535] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:46.230 [2024-07-24 05:13:00.799546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:46.230 [2024-07-24 05:13:00.799561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:46.230 [2024-07-24 05:13:00.799572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:46.230 [2024-07-24 05:13:00.799582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:46.231 [2024-07-24 05:13:00.799592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.231 [2024-07-24 05:13:00.799604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:46.231 [2024-07-24 05:13:00.799616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.375 ms 00:23:46.231 [2024-07-24 05:13:00.799627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.231 [2024-07-24 05:13:00.816259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.231 [2024-07-24 05:13:00.816296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:46.231 [2024-07-24 05:13:00.816326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.572 ms 00:23:46.231 [2024-07-24 05:13:00.816349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.231 [2024-07-24 05:13:00.816727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.231 [2024-07-24 05:13:00.816743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:46.231 [2024-07-24 05:13:00.816755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:23:46.231 [2024-07-24 05:13:00.816765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.231 [2024-07-24 05:13:00.851269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.231 [2024-07-24 05:13:00.851529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:46.231 [2024-07-24 05:13:00.851679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.231 [2024-07-24 05:13:00.851743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.231 [2024-07-24 05:13:00.851948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.231 [2024-07-24 05:13:00.852114] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:46.231 [2024-07-24 05:13:00.852139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.231 [2024-07-24 05:13:00.852151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.231 [2024-07-24 05:13:00.852270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.231 [2024-07-24 05:13:00.852288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:46.231 [2024-07-24 05:13:00.852300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.231 [2024-07-24 05:13:00.852317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.231 [2024-07-24 05:13:00.852338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.231 [2024-07-24 05:13:00.852350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:46.231 [2024-07-24 05:13:00.852362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.231 [2024-07-24 05:13:00.852372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.488 [2024-07-24 05:13:00.941835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.488 [2024-07-24 05:13:00.941899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:46.488 [2024-07-24 05:13:00.941933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.488 [2024-07-24 05:13:00.941949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.488 [2024-07-24 05:13:01.013984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.488 [2024-07-24 05:13:01.014041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:46.488 [2024-07-24 05:13:01.014072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.488 [2024-07-24 05:13:01.014082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.488 [2024-07-24 05:13:01.014180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.488 [2024-07-24 05:13:01.014196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:46.488 [2024-07-24 05:13:01.014207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.488 [2024-07-24 05:13:01.014216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.488 [2024-07-24 05:13:01.014261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.488 [2024-07-24 05:13:01.014275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:46.488 [2024-07-24 05:13:01.014285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.488 [2024-07-24 05:13:01.014295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.488 [2024-07-24 05:13:01.014399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.488 [2024-07-24 05:13:01.014416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:46.488 [2024-07-24 05:13:01.014427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.488 [2024-07-24 05:13:01.014437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.488 [2024-07-24 05:13:01.014480] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.488 [2024-07-24 05:13:01.014501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:46.488 [2024-07-24 05:13:01.014511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.488 [2024-07-24 05:13:01.014521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.488 [2024-07-24 05:13:01.014561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.488 [2024-07-24 05:13:01.014574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:46.488 [2024-07-24 05:13:01.014585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.488 [2024-07-24 05:13:01.014594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.488 [2024-07-24 05:13:01.014644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.488 [2024-07-24 05:13:01.014659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:46.488 [2024-07-24 05:13:01.014669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.488 [2024-07-24 05:13:01.014678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.488 [2024-07-24 05:13:01.014798] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 516.624 ms, result 0 00:23:47.449 00:23:47.449 00:23:47.449 05:13:02 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:49.352 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:49.352 05:13:03 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:49.352 05:13:03 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:49.352 05:13:03 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:49.610 05:13:04 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:49.611 05:13:04 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:49.611 Process with pid 80848 is not found 00:23:49.611 Remove shared memory files 00:23:49.611 05:13:04 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80848 00:23:49.611 05:13:04 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 80848 ']' 00:23:49.611 05:13:04 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 80848 00:23:49.611 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80848) - No such process 00:23:49.611 05:13:04 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 80848 is not found' 00:23:49.611 05:13:04 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:49.611 05:13:04 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:49.611 05:13:04 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:49.611 05:13:04 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:49.611 05:13:04 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:49.611 05:13:04 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:49.611 05:13:04 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:49.611 ************************************ 00:23:49.611 END TEST ftl_restore 00:23:49.611 ************************************ 00:23:49.611 00:23:49.611 real 3m29.606s 
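A consistency note on the ftl_restore run that just finished: the md5sum -c pass above confirms the data written before the FTL shutdown survived the restore, and the statistics dump is internally consistent, since the write amplification factor reported by ftl_dev_dump_stats is simply total writes divided by user writes, 13760 / 12800 = 1.0750, matching the "WAF: 1.0750" line. A minimal sketch of that recomputation, using only values copied from the log above:

```bash
# Recompute the WAF printed by ftl_dev_dump_stats from the values logged
# above (total writes / user writes); expected output: WAF: 1.0750
total_writes=13760
user_writes=12800
awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.4f\n", t / u }'
```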
00:23:49.611 user 3m16.812s 00:23:49.611 sys 0m14.217s 00:23:49.611 05:13:04 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:49.611 05:13:04 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:49.611 05:13:04 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:49.611 05:13:04 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:49.611 05:13:04 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:49.611 05:13:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:49.611 ************************************ 00:23:49.611 START TEST ftl_dirty_shutdown 00:23:49.611 ************************************ 00:23:49.611 05:13:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:49.869 * Looking for test storage... 00:23:49.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # 
export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:49.869 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:49.870 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=83072 00:23:49.870 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 83072 00:23:49.870 05:13:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 83072 ']' 00:23:49.870 05:13:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:49.870 05:13:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.870 05:13:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.870 05:13:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.870 05:13:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.870 05:13:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:49.870 [2024-07-24 05:13:04.423356] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
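The xtrace lines above show how dirty_shutdown.sh consumes its arguments: getopts with the `:u:c:` optstring peels off `-c 0000:00:10.0` as the NV cache address, the parsed options are shifted away, and `0000:00:11.0` remains as the base device. A minimal sketch of that pattern, assuming the same optstring seen in the trace (the real script also sets the timeout, block_size, chunk_size and data_size defaults logged above):

```bash
#!/usr/bin/env bash
# Sketch of the option handling traced above: -c selects the NV cache
# PCIe address; whatever remains after the shift is the base device.
nv_cache='' uuid=''
while getopts ':u:c:' opt; do
    case $opt in
        c) nv_cache=$OPTARG ;;   # e.g. 0000:00:10.0
        u) uuid=$OPTARG ;;       # present in the optstring; not exercised in this run
    esac
done
shift $((OPTIND - 1))            # the traced script logs this step as 'shift 2'
device=$1                        # e.g. 0000:00:11.0
echo "nv_cache=$nv_cache device=$device"
```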
00:23:49.870 [2024-07-24 05:13:04.423757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83072 ] 00:23:50.128 [2024-07-24 05:13:04.586565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.386 [2024-07-24 05:13:04.815551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.953 05:13:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.953 05:13:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:23:50.953 05:13:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:50.953 05:13:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:50.953 05:13:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:50.953 05:13:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:50.953 05:13:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:50.953 05:13:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:51.212 05:13:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:51.212 05:13:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:51.212 05:13:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:51.212 05:13:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bdev_name=nvme0n1 00:23:51.212 05:13:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local bdev_info 00:23:51.212 05:13:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bs 00:23:51.212 05:13:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local nb 00:23:51.212 05:13:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:51.471 05:13:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:23:51.471 { 00:23:51.471 "name": "nvme0n1", 00:23:51.471 "aliases": [ 00:23:51.471 "c969d473-5199-40f5-9e4e-5115f0460a02" 00:23:51.471 ], 00:23:51.471 "product_name": "NVMe disk", 00:23:51.471 "block_size": 4096, 00:23:51.471 "num_blocks": 1310720, 00:23:51.471 "uuid": "c969d473-5199-40f5-9e4e-5115f0460a02", 00:23:51.471 "assigned_rate_limits": { 00:23:51.471 "rw_ios_per_sec": 0, 00:23:51.471 "rw_mbytes_per_sec": 0, 00:23:51.471 "r_mbytes_per_sec": 0, 00:23:51.471 "w_mbytes_per_sec": 0 00:23:51.471 }, 00:23:51.471 "claimed": true, 00:23:51.471 "claim_type": "read_many_write_one", 00:23:51.471 "zoned": false, 00:23:51.471 "supported_io_types": { 00:23:51.471 "read": true, 00:23:51.471 "write": true, 00:23:51.471 "unmap": true, 00:23:51.471 "flush": true, 00:23:51.471 "reset": true, 00:23:51.471 "nvme_admin": true, 00:23:51.471 "nvme_io": true, 00:23:51.471 "nvme_io_md": false, 00:23:51.471 "write_zeroes": true, 00:23:51.471 "zcopy": false, 00:23:51.471 "get_zone_info": false, 00:23:51.471 "zone_management": false, 00:23:51.471 "zone_append": false, 00:23:51.471 "compare": true, 00:23:51.471 "compare_and_write": false, 00:23:51.471 "abort": true, 00:23:51.471 "seek_hole": false, 00:23:51.471 "seek_data": false, 00:23:51.471 "copy": true, 00:23:51.471 
"nvme_iov_md": false 00:23:51.471 }, 00:23:51.471 "driver_specific": { 00:23:51.471 "nvme": [ 00:23:51.471 { 00:23:51.471 "pci_address": "0000:00:11.0", 00:23:51.471 "trid": { 00:23:51.471 "trtype": "PCIe", 00:23:51.471 "traddr": "0000:00:11.0" 00:23:51.471 }, 00:23:51.471 "ctrlr_data": { 00:23:51.471 "cntlid": 0, 00:23:51.471 "vendor_id": "0x1b36", 00:23:51.471 "model_number": "QEMU NVMe Ctrl", 00:23:51.471 "serial_number": "12341", 00:23:51.471 "firmware_revision": "8.0.0", 00:23:51.471 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:51.471 "oacs": { 00:23:51.471 "security": 0, 00:23:51.471 "format": 1, 00:23:51.471 "firmware": 0, 00:23:51.471 "ns_manage": 1 00:23:51.471 }, 00:23:51.471 "multi_ctrlr": false, 00:23:51.471 "ana_reporting": false 00:23:51.471 }, 00:23:51.471 "vs": { 00:23:51.471 "nvme_version": "1.4" 00:23:51.471 }, 00:23:51.471 "ns_data": { 00:23:51.471 "id": 1, 00:23:51.471 "can_share": false 00:23:51.471 } 00:23:51.471 } 00:23:51.471 ], 00:23:51.471 "mp_policy": "active_passive" 00:23:51.471 } 00:23:51.471 } 00:23:51.471 ]' 00:23:51.471 05:13:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:23:51.471 05:13:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # bs=4096 00:23:51.471 05:13:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:23:51.729 05:13:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # nb=1310720 00:23:51.729 05:13:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bdev_size=5120 00:23:51.729 05:13:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # echo 5120 00:23:51.729 05:13:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:51.729 05:13:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:51.729 05:13:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:51.729 05:13:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:51.729 05:13:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:51.988 05:13:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=28360d4b-9040-48c1-9cec-cdb60e129cbc 00:23:51.988 05:13:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:51.988 05:13:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 28360d4b-9040-48c1-9cec-cdb60e129cbc 00:23:52.247 05:13:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:52.247 05:13:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=7fb6833e-7570-475f-a7e7-ef9b16cc08ed 00:23:52.247 05:13:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7fb6833e-7570-475f-a7e7-ef9b16cc08ed 00:23:52.816 05:13:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=6d7e8de4-955d-450d-b3bc-17081a99601b 00:23:52.816 05:13:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:52.816 05:13:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6d7e8de4-955d-450d-b3bc-17081a99601b 00:23:52.816 05:13:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:52.816 05:13:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:52.816 
05:13:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=6d7e8de4-955d-450d-b3bc-17081a99601b 00:23:52.816 05:13:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:52.816 05:13:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 6d7e8de4-955d-450d-b3bc-17081a99601b 00:23:52.816 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bdev_name=6d7e8de4-955d-450d-b3bc-17081a99601b 00:23:52.816 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local bdev_info 00:23:52.816 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bs 00:23:52.816 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local nb 00:23:52.816 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d7e8de4-955d-450d-b3bc-17081a99601b 00:23:52.816 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:23:52.816 { 00:23:52.816 "name": "6d7e8de4-955d-450d-b3bc-17081a99601b", 00:23:52.816 "aliases": [ 00:23:52.816 "lvs/nvme0n1p0" 00:23:52.816 ], 00:23:52.816 "product_name": "Logical Volume", 00:23:52.816 "block_size": 4096, 00:23:52.816 "num_blocks": 26476544, 00:23:52.816 "uuid": "6d7e8de4-955d-450d-b3bc-17081a99601b", 00:23:52.816 "assigned_rate_limits": { 00:23:52.816 "rw_ios_per_sec": 0, 00:23:52.816 "rw_mbytes_per_sec": 0, 00:23:52.816 "r_mbytes_per_sec": 0, 00:23:52.816 "w_mbytes_per_sec": 0 00:23:52.816 }, 00:23:52.816 "claimed": false, 00:23:52.816 "zoned": false, 00:23:52.816 "supported_io_types": { 00:23:52.816 "read": true, 00:23:52.816 "write": true, 00:23:52.816 "unmap": true, 00:23:52.816 "flush": false, 00:23:52.816 "reset": true, 00:23:52.816 "nvme_admin": false, 00:23:52.816 "nvme_io": false, 00:23:52.816 "nvme_io_md": false, 00:23:52.816 "write_zeroes": true, 00:23:52.816 "zcopy": false, 00:23:52.816 "get_zone_info": false, 00:23:52.816 "zone_management": false, 00:23:52.816 "zone_append": false, 00:23:52.816 "compare": false, 00:23:52.816 "compare_and_write": false, 00:23:52.816 "abort": false, 00:23:52.816 "seek_hole": true, 00:23:52.816 "seek_data": true, 00:23:52.816 "copy": false, 00:23:52.816 "nvme_iov_md": false 00:23:52.816 }, 00:23:52.816 "driver_specific": { 00:23:52.816 "lvol": { 00:23:52.816 "lvol_store_uuid": "7fb6833e-7570-475f-a7e7-ef9b16cc08ed", 00:23:52.816 "base_bdev": "nvme0n1", 00:23:52.816 "thin_provision": true, 00:23:52.816 "num_allocated_clusters": 0, 00:23:52.816 "snapshot": false, 00:23:52.816 "clone": false, 00:23:52.816 "esnap_clone": false 00:23:52.816 } 00:23:52.816 } 00:23:52.816 } 00:23:52.816 ]' 00:23:52.816 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:23:53.075 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # bs=4096 00:23:53.075 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:23:53.075 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # nb=26476544 00:23:53.075 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:23:53.075 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # echo 103424 00:23:53.075 05:13:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:53.075 05:13:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:53.075 05:13:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:53.334 05:13:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:53.334 05:13:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:53.334 05:13:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 6d7e8de4-955d-450d-b3bc-17081a99601b 00:23:53.334 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bdev_name=6d7e8de4-955d-450d-b3bc-17081a99601b 00:23:53.334 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local bdev_info 00:23:53.334 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bs 00:23:53.334 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local nb 00:23:53.334 05:13:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d7e8de4-955d-450d-b3bc-17081a99601b 00:23:53.594 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:23:53.594 { 00:23:53.594 "name": "6d7e8de4-955d-450d-b3bc-17081a99601b", 00:23:53.594 "aliases": [ 00:23:53.594 "lvs/nvme0n1p0" 00:23:53.594 ], 00:23:53.594 "product_name": "Logical Volume", 00:23:53.594 "block_size": 4096, 00:23:53.594 "num_blocks": 26476544, 00:23:53.594 "uuid": "6d7e8de4-955d-450d-b3bc-17081a99601b", 00:23:53.594 "assigned_rate_limits": { 00:23:53.594 "rw_ios_per_sec": 0, 00:23:53.594 "rw_mbytes_per_sec": 0, 00:23:53.594 "r_mbytes_per_sec": 0, 00:23:53.594 "w_mbytes_per_sec": 0 00:23:53.594 }, 00:23:53.594 "claimed": false, 00:23:53.594 "zoned": false, 00:23:53.594 "supported_io_types": { 00:23:53.594 "read": true, 00:23:53.594 "write": true, 00:23:53.594 "unmap": true, 00:23:53.594 "flush": false, 00:23:53.594 "reset": true, 00:23:53.594 "nvme_admin": false, 00:23:53.594 "nvme_io": false, 00:23:53.594 "nvme_io_md": false, 00:23:53.594 "write_zeroes": true, 00:23:53.594 "zcopy": false, 00:23:53.594 "get_zone_info": false, 00:23:53.594 "zone_management": false, 00:23:53.594 "zone_append": false, 00:23:53.594 "compare": false, 00:23:53.594 "compare_and_write": false, 00:23:53.594 "abort": false, 00:23:53.594 "seek_hole": true, 00:23:53.594 "seek_data": true, 00:23:53.594 "copy": false, 00:23:53.594 "nvme_iov_md": false 00:23:53.594 }, 00:23:53.594 "driver_specific": { 00:23:53.594 "lvol": { 00:23:53.594 "lvol_store_uuid": "7fb6833e-7570-475f-a7e7-ef9b16cc08ed", 00:23:53.594 "base_bdev": "nvme0n1", 00:23:53.594 "thin_provision": true, 00:23:53.594 "num_allocated_clusters": 0, 00:23:53.594 "snapshot": false, 00:23:53.594 "clone": false, 00:23:53.594 "esnap_clone": false 00:23:53.594 } 00:23:53.594 } 00:23:53.594 } 00:23:53.594 ]' 00:23:53.594 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:23:53.594 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # bs=4096 00:23:53.594 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:23:53.594 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # nb=26476544 00:23:53.594 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:23:53.594 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # echo 103424 00:23:53.594 05:13:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:53.594 05:13:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:53.853 05:13:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:53.853 05:13:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 6d7e8de4-955d-450d-b3bc-17081a99601b 00:23:53.853 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bdev_name=6d7e8de4-955d-450d-b3bc-17081a99601b 00:23:53.853 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local bdev_info 00:23:53.853 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bs 00:23:53.853 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local nb 00:23:53.853 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d7e8de4-955d-450d-b3bc-17081a99601b 00:23:54.112 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:23:54.112 { 00:23:54.112 "name": "6d7e8de4-955d-450d-b3bc-17081a99601b", 00:23:54.112 "aliases": [ 00:23:54.112 "lvs/nvme0n1p0" 00:23:54.112 ], 00:23:54.112 "product_name": "Logical Volume", 00:23:54.112 "block_size": 4096, 00:23:54.112 "num_blocks": 26476544, 00:23:54.112 "uuid": "6d7e8de4-955d-450d-b3bc-17081a99601b", 00:23:54.112 "assigned_rate_limits": { 00:23:54.112 "rw_ios_per_sec": 0, 00:23:54.112 "rw_mbytes_per_sec": 0, 00:23:54.112 "r_mbytes_per_sec": 0, 00:23:54.112 "w_mbytes_per_sec": 0 00:23:54.112 }, 00:23:54.112 "claimed": false, 00:23:54.112 "zoned": false, 00:23:54.112 "supported_io_types": { 00:23:54.112 "read": true, 00:23:54.112 "write": true, 00:23:54.112 "unmap": true, 00:23:54.112 "flush": false, 00:23:54.112 "reset": true, 00:23:54.112 "nvme_admin": false, 00:23:54.112 "nvme_io": false, 00:23:54.112 "nvme_io_md": false, 00:23:54.112 "write_zeroes": true, 00:23:54.112 "zcopy": false, 00:23:54.112 "get_zone_info": false, 00:23:54.112 "zone_management": false, 00:23:54.112 "zone_append": false, 00:23:54.112 "compare": false, 00:23:54.112 "compare_and_write": false, 00:23:54.112 "abort": false, 00:23:54.112 "seek_hole": true, 00:23:54.112 "seek_data": true, 00:23:54.112 "copy": false, 00:23:54.112 "nvme_iov_md": false 00:23:54.112 }, 00:23:54.112 "driver_specific": { 00:23:54.112 "lvol": { 00:23:54.112 "lvol_store_uuid": "7fb6833e-7570-475f-a7e7-ef9b16cc08ed", 00:23:54.112 "base_bdev": "nvme0n1", 00:23:54.112 "thin_provision": true, 00:23:54.112 "num_allocated_clusters": 0, 00:23:54.112 "snapshot": false, 00:23:54.112 "clone": false, 00:23:54.112 "esnap_clone": false 00:23:54.112 } 00:23:54.112 } 00:23:54.112 } 00:23:54.112 ]' 00:23:54.113 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:23:54.372 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # bs=4096 00:23:54.372 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:23:54.372 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # nb=26476544 00:23:54.372 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:23:54.372 05:13:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # echo 103424 00:23:54.372 05:13:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:54.372 05:13:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6d7e8de4-955d-450d-b3bc-17081a99601b 
--l2p_dram_limit 10' 00:23:54.372 05:13:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:54.372 05:13:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:54.372 05:13:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:54.372 05:13:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6d7e8de4-955d-450d-b3bc-17081a99601b --l2p_dram_limit 10 -c nvc0n1p0 00:23:54.658 [2024-07-24 05:13:09.017381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.658 [2024-07-24 05:13:09.017446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:54.658 [2024-07-24 05:13:09.017470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:54.658 [2024-07-24 05:13:09.017493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.658 [2024-07-24 05:13:09.017576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.658 [2024-07-24 05:13:09.017599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:54.658 [2024-07-24 05:13:09.017613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:54.658 [2024-07-24 05:13:09.017628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.658 [2024-07-24 05:13:09.017660] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:54.658 [2024-07-24 05:13:09.018709] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:54.658 [2024-07-24 05:13:09.018751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.658 [2024-07-24 05:13:09.018789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:54.658 [2024-07-24 05:13:09.018804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:23:54.658 [2024-07-24 05:13:09.018819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.658 [2024-07-24 05:13:09.018969] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d94c5153-45fe-4502-98b6-72c9a7f5f943 00:23:54.658 [2024-07-24 05:13:09.020108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.658 [2024-07-24 05:13:09.020150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:54.658 [2024-07-24 05:13:09.020203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:54.658 [2024-07-24 05:13:09.020232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.658 [2024-07-24 05:13:09.025357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.658 [2024-07-24 05:13:09.025419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:54.658 [2024-07-24 05:13:09.025486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.056 ms 00:23:54.658 [2024-07-24 05:13:09.025498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.658 [2024-07-24 05:13:09.025635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.658 [2024-07-24 05:13:09.025655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:54.658 [2024-07-24 05:13:09.025671] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:23:54.658 [2024-07-24 05:13:09.025682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.658 [2024-07-24 05:13:09.025769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.658 [2024-07-24 05:13:09.025787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:54.658 [2024-07-24 05:13:09.025899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:54.658 [2024-07-24 05:13:09.025912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.658 [2024-07-24 05:13:09.025969] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:54.658 [2024-07-24 05:13:09.030952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.658 [2024-07-24 05:13:09.030997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:54.658 [2024-07-24 05:13:09.031015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.998 ms 00:23:54.658 [2024-07-24 05:13:09.031030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.658 [2024-07-24 05:13:09.031079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.658 [2024-07-24 05:13:09.031100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:54.658 [2024-07-24 05:13:09.031113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:54.658 [2024-07-24 05:13:09.031128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.658 [2024-07-24 05:13:09.031189] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:54.658 [2024-07-24 05:13:09.031396] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:54.658 [2024-07-24 05:13:09.031418] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:54.659 [2024-07-24 05:13:09.031440] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:54.659 [2024-07-24 05:13:09.031458] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:54.659 [2024-07-24 05:13:09.031475] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:54.659 [2024-07-24 05:13:09.031489] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:54.659 [2024-07-24 05:13:09.031509] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:54.659 [2024-07-24 05:13:09.031523] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:54.659 [2024-07-24 05:13:09.031537] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:54.659 [2024-07-24 05:13:09.031550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.659 [2024-07-24 05:13:09.031565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:54.659 [2024-07-24 05:13:09.031579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:23:54.659 [2024-07-24 05:13:09.031593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.659 [2024-07-24 05:13:09.031692] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.659 [2024-07-24 05:13:09.031711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:54.659 [2024-07-24 05:13:09.031725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:54.659 [2024-07-24 05:13:09.031757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.659 [2024-07-24 05:13:09.031896] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:54.659 [2024-07-24 05:13:09.031922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:54.659 [2024-07-24 05:13:09.031949] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:54.659 [2024-07-24 05:13:09.031966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.659 [2024-07-24 05:13:09.031979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:54.659 [2024-07-24 05:13:09.031993] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:54.659 [2024-07-24 05:13:09.032006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:54.659 [2024-07-24 05:13:09.032020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:54.659 [2024-07-24 05:13:09.032032] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:54.659 [2024-07-24 05:13:09.032046] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:54.659 [2024-07-24 05:13:09.032058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:54.659 [2024-07-24 05:13:09.032074] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:54.659 [2024-07-24 05:13:09.032086] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:54.659 [2024-07-24 05:13:09.032100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:54.659 [2024-07-24 05:13:09.032113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:54.659 [2024-07-24 05:13:09.032126] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.659 [2024-07-24 05:13:09.032138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:54.659 [2024-07-24 05:13:09.032155] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:54.659 [2024-07-24 05:13:09.032167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.659 [2024-07-24 05:13:09.032181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:54.659 [2024-07-24 05:13:09.032193] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:54.659 [2024-07-24 05:13:09.032207] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.659 [2024-07-24 05:13:09.032219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:54.659 [2024-07-24 05:13:09.032233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:54.659 [2024-07-24 05:13:09.032260] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.659 [2024-07-24 05:13:09.032273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:54.659 [2024-07-24 05:13:09.032284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:54.659 [2024-07-24 05:13:09.032298] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.659 [2024-07-24 05:13:09.032310] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:54.659 [2024-07-24 05:13:09.032324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:54.659 [2024-07-24 05:13:09.032335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.659 [2024-07-24 05:13:09.032348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:54.659 [2024-07-24 05:13:09.032360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:54.659 [2024-07-24 05:13:09.032376] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:54.659 [2024-07-24 05:13:09.032388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:54.659 [2024-07-24 05:13:09.032402] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:54.659 [2024-07-24 05:13:09.032416] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:54.659 [2024-07-24 05:13:09.032431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:54.659 [2024-07-24 05:13:09.032443] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:54.659 [2024-07-24 05:13:09.032457] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.659 [2024-07-24 05:13:09.032468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:54.659 [2024-07-24 05:13:09.032482] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:54.659 [2024-07-24 05:13:09.032493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.659 [2024-07-24 05:13:09.032506] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:54.659 [2024-07-24 05:13:09.032519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:54.659 [2024-07-24 05:13:09.032533] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:54.659 [2024-07-24 05:13:09.032546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.659 [2024-07-24 05:13:09.032560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:54.659 [2024-07-24 05:13:09.032573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:54.659 [2024-07-24 05:13:09.032588] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:54.659 [2024-07-24 05:13:09.032600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:54.659 [2024-07-24 05:13:09.032614] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:54.659 [2024-07-24 05:13:09.032626] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:54.659 [2024-07-24 05:13:09.032644] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:54.659 [2024-07-24 05:13:09.032662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:54.659 [2024-07-24 05:13:09.032678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:54.659 [2024-07-24 05:13:09.032691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:54.659 [2024-07-24 05:13:09.032705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:54.659 [2024-07-24 05:13:09.032717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:54.659 [2024-07-24 05:13:09.032731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:54.659 [2024-07-24 05:13:09.032744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:54.659 [2024-07-24 05:13:09.032759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:54.659 [2024-07-24 05:13:09.032772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:54.659 [2024-07-24 05:13:09.032786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:54.659 [2024-07-24 05:13:09.032798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:54.659 [2024-07-24 05:13:09.032815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:54.659 [2024-07-24 05:13:09.032827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:54.659 [2024-07-24 05:13:09.032841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:54.659 [2024-07-24 05:13:09.032898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:54.659 [2024-07-24 05:13:09.032916] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:54.659 [2024-07-24 05:13:09.032931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:54.659 [2024-07-24 05:13:09.032947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:54.659 [2024-07-24 05:13:09.032960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:54.659 [2024-07-24 05:13:09.032975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:54.659 [2024-07-24 05:13:09.032988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:54.659 [2024-07-24 05:13:09.033004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.659 [2024-07-24 05:13:09.033017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:54.659 [2024-07-24 05:13:09.033032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.201 ms 00:23:54.659 [2024-07-24 05:13:09.033045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.659 [2024-07-24 05:13:09.033104] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:54.659 [2024-07-24 05:13:09.033121] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:56.562 [2024-07-24 05:13:10.939093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.562 [2024-07-24 05:13:10.939162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:56.562 [2024-07-24 05:13:10.939201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1905.992 ms 00:23:56.562 [2024-07-24 05:13:10.939213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.562 [2024-07-24 05:13:10.969758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.562 [2024-07-24 05:13:10.969821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:56.562 [2024-07-24 05:13:10.969911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.274 ms 00:23:56.562 [2024-07-24 05:13:10.969927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.562 [2024-07-24 05:13:10.970141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.562 [2024-07-24 05:13:10.970159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:56.562 [2024-07-24 05:13:10.970179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:56.562 [2024-07-24 05:13:10.970191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.562 [2024-07-24 05:13:11.003578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.562 [2024-07-24 05:13:11.003637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:56.562 [2024-07-24 05:13:11.003675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.328 ms 00:23:56.562 [2024-07-24 05:13:11.003688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.562 [2024-07-24 05:13:11.003794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.562 [2024-07-24 05:13:11.003809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:56.562 [2024-07-24 05:13:11.003829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:56.562 [2024-07-24 05:13:11.003840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.562 [2024-07-24 05:13:11.004298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.562 [2024-07-24 05:13:11.004323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:56.562 [2024-07-24 05:13:11.004341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:23:56.562 [2024-07-24 05:13:11.004353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.562 [2024-07-24 05:13:11.004493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.562 [2024-07-24 05:13:11.004514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:56.562 [2024-07-24 05:13:11.004529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:23:56.562 [2024-07-24 05:13:11.004541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.562 [2024-07-24 05:13:11.020362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.562 [2024-07-24 05:13:11.020412] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:56.562 [2024-07-24 05:13:11.020449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.774 ms 00:23:56.562 [2024-07-24 05:13:11.020461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.562 [2024-07-24 05:13:11.032201] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:56.562 [2024-07-24 05:13:11.035121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.562 [2024-07-24 05:13:11.035188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:56.562 [2024-07-24 05:13:11.035206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.548 ms 00:23:56.562 [2024-07-24 05:13:11.035219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.562 [2024-07-24 05:13:11.097208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.562 [2024-07-24 05:13:11.097295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:56.562 [2024-07-24 05:13:11.097315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.939 ms 00:23:56.562 [2024-07-24 05:13:11.097328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.562 [2024-07-24 05:13:11.097532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.562 [2024-07-24 05:13:11.097554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:56.562 [2024-07-24 05:13:11.097567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:23:56.562 [2024-07-24 05:13:11.097582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.562 [2024-07-24 05:13:11.125331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.562 [2024-07-24 05:13:11.125407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:56.562 [2024-07-24 05:13:11.125428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.684 ms 00:23:56.562 [2024-07-24 05:13:11.125445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.562 [2024-07-24 05:13:11.151997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.562 [2024-07-24 05:13:11.152071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:56.562 [2024-07-24 05:13:11.152091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.500 ms 00:23:56.562 [2024-07-24 05:13:11.152105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.562 [2024-07-24 05:13:11.152740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.562 [2024-07-24 05:13:11.152767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:56.562 [2024-07-24 05:13:11.152784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:23:56.562 [2024-07-24 05:13:11.152797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.822 [2024-07-24 05:13:11.233538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.822 [2024-07-24 05:13:11.233630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:56.822 [2024-07-24 05:13:11.233650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.615 ms 00:23:56.822 [2024-07-24 05:13:11.233667] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.822 [2024-07-24 05:13:11.261655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.822 [2024-07-24 05:13:11.261741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:56.822 [2024-07-24 05:13:11.261761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.933 ms 00:23:56.822 [2024-07-24 05:13:11.261775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.822 [2024-07-24 05:13:11.289445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.822 [2024-07-24 05:13:11.289529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:56.822 [2024-07-24 05:13:11.289550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.616 ms 00:23:56.822 [2024-07-24 05:13:11.289563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.822 [2024-07-24 05:13:11.317253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.822 [2024-07-24 05:13:11.317328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:56.822 [2024-07-24 05:13:11.317348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.641 ms 00:23:56.822 [2024-07-24 05:13:11.317362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.822 [2024-07-24 05:13:11.317415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.822 [2024-07-24 05:13:11.317437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:56.822 [2024-07-24 05:13:11.317450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:56.822 [2024-07-24 05:13:11.317466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.822 [2024-07-24 05:13:11.317587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.822 [2024-07-24 05:13:11.317613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:56.822 [2024-07-24 05:13:11.317626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:56.822 [2024-07-24 05:13:11.317638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.822 [2024-07-24 05:13:11.318912] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2300.886 ms, result 0 00:23:56.822 { 00:23:56.822 "name": "ftl0", 00:23:56.822 "uuid": "d94c5153-45fe-4502-98b6-72c9a7f5f943" 00:23:56.822 } 00:23:56.822 05:13:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:56.822 05:13:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:57.081 05:13:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:57.081 05:13:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:57.081 05:13:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:57.341 /dev/nbd0 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:57.341 1+0 records in 00:23:57.341 1+0 records out 00:23:57.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475017 s, 8.6 MB/s 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:23:57.341 05:13:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:57.600 [2024-07-24 05:13:12.013885] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:23:57.600 [2024-07-24 05:13:12.014062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83204 ] 00:23:57.600 [2024-07-24 05:13:12.188642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.859 [2024-07-24 05:13:12.417198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.618  Copying: 193/1024 [MB] (193 MBps) Copying: 389/1024 [MB] (195 MBps) Copying: 578/1024 [MB] (188 MBps) Copying: 766/1024 [MB] (188 MBps) Copying: 950/1024 [MB] (183 MBps) Copying: 1024/1024 [MB] (average 189 MBps) 00:24:04.618 00:24:04.618 05:13:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:07.154 05:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:07.154 [2024-07-24 05:13:21.228020] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
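Note: the exchange above is the dirty-write phase of ftl/dirty_shutdown.sh. The test stages 1 GiB of random data (262144 blocks of 4096 bytes) in testfile, records its md5sum so the contents can be checked after recovery, then pushes the same data through the FTL bdev via its /dev/nbd0 frontend with O_DIRECT. A minimal stand-alone sketch of that phase follows; it assumes a running SPDK target with ftl0 already constructed and exposed over nbd, and the paths and the -m 0x2 core mask are simply copied from the log:

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK=/home/vagrant/spdk_repo/spdk            # repo path as it appears in the log
    TESTFILE=$SPDK/test/ftl/testfile

    # Stage 1 GiB of random data on the host filesystem (262144 x 4 KiB blocks).
    "$SPDK/build/bin/spdk_dd" -m 0x2 --if=/dev/urandom --of="$TESTFILE" \
        --bs=4096 --count=262144
    md5sum "$TESTFILE"                           # checksum kept for post-recovery comparison

    # Write the staged data through the FTL device via its nbd frontend.
    "$SPDK/build/bin/spdk_dd" -m 0x2 --if="$TESTFILE" --of=/dev/nbd0 \
        --bs=4096 --count=262144 --oflag=direct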
00:24:07.154 [2024-07-24 05:13:21.229150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83298 ] 00:24:07.154 [2024-07-24 05:13:21.397698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.154 [2024-07-24 05:13:21.601131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.725  Copying: 14/1024 [MB] (14 MBps) [69 intermediate progress updates at a steady 12-15 MBps condensed] Copying: 1024/1024 [MB] (average 14 MBps) 00:25:17.725 00:25:17.725 00:25:17.984 00:25:17.984 05:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:17.984 05:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:18.243 05:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:18.243 [2024-07-24 05:14:32.834367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.243 [2024-07-24 05:14:32.834436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:18.243 [2024-07-24 05:14:32.834492] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:18.243 [2024-07-24 05:14:32.834506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.243 [2024-07-24 05:14:32.834564] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:18.243 [2024-07-24 05:14:32.838013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.243 [2024-07-24 05:14:32.838055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:18.243 [2024-07-24 05:14:32.838074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.423 ms 00:25:18.243 [2024-07-24 05:14:32.838091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.243 [2024-07-24 05:14:32.839974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.243 [2024-07-24 05:14:32.840030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:18.243 [2024-07-24 05:14:32.840066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.846 ms 00:25:18.243 [2024-07-24 05:14:32.840099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.243 [2024-07-24 05:14:32.856353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.243 [2024-07-24 05:14:32.856413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:18.243 [2024-07-24 05:14:32.856432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.227 ms 00:25:18.243 [2024-07-24 05:14:32.856446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.243 [2024-07-24 05:14:32.862674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.243 [2024-07-24 05:14:32.862729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:18.243 [2024-07-24 05:14:32.862745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.184 ms 00:25:18.243 [2024-07-24 05:14:32.862759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.503 [2024-07-24 05:14:32.892478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.503 [2024-07-24 05:14:32.892541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:18.503 [2024-07-24 05:14:32.892559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.633 ms 00:25:18.503 [2024-07-24 05:14:32.892573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.503 [2024-07-24 05:14:32.909906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.503 [2024-07-24 05:14:32.909992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:18.503 [2024-07-24 05:14:32.910011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.285 ms 00:25:18.503 [2024-07-24 05:14:32.910026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.503 [2024-07-24 05:14:32.910220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.503 [2024-07-24 05:14:32.910246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:18.503 [2024-07-24 05:14:32.910259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:25:18.503 [2024-07-24 05:14:32.910273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.503 [2024-07-24 05:14:32.938125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:25:18.503 [2024-07-24 05:14:32.938193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:18.503 [2024-07-24 05:14:32.938226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.829 ms 00:25:18.503 [2024-07-24 05:14:32.938255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.503 [2024-07-24 05:14:32.965774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.503 [2024-07-24 05:14:32.965835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:18.503 [2024-07-24 05:14:32.965866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.475 ms 00:25:18.503 [2024-07-24 05:14:32.965901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.503 [2024-07-24 05:14:32.993521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.503 [2024-07-24 05:14:32.993584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:18.503 [2024-07-24 05:14:32.993601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.569 ms 00:25:18.503 [2024-07-24 05:14:32.993614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.504 [2024-07-24 05:14:33.020946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.504 [2024-07-24 05:14:33.021016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:18.504 [2024-07-24 05:14:33.021034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.224 ms 00:25:18.504 [2024-07-24 05:14:33.021047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.504 [2024-07-24 05:14:33.021094] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:18.504 [2024-07-24 05:14:33.021123 .. 05:14:33.022502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free [100 identical per-band entries condensed: every band reports 0 valid blocks of 261120, wr_cnt 0, state free] 00:25:18.505 [2024-07-24 05:14:33.022525] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:18.505 [2024-07-24 05:14:33.022536] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d94c5153-45fe-4502-98b6-72c9a7f5f943 00:25:18.505 [2024-07-24 05:14:33.022554] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:18.505 [2024-07-24 05:14:33.022567] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:18.505 [2024-07-24 05:14:33.022582] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:18.505 [2024-07-24 05:14:33.022594] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:18.505 [2024-07-24 05:14:33.022606] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:18.505 [2024-07-24 05:14:33.022617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:18.505 [2024-07-24 05:14:33.022630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:18.505 [2024-07-24 05:14:33.022641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:18.505 [2024-07-24 05:14:33.022653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:18.505 [2024-07-24 05:14:33.022665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.505 [2024-07-24 05:14:33.022679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:18.505 [2024-07-24 05:14:33.022691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:25:18.505 [2024-07-24
05:14:33.022704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.505 [2024-07-24 05:14:33.037887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.505 [2024-07-24 05:14:33.037980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:18.505 [2024-07-24 05:14:33.038015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.121 ms 00:25:18.505 [2024-07-24 05:14:33.038031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.505 [2024-07-24 05:14:33.038528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.505 [2024-07-24 05:14:33.038564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:18.505 [2024-07-24 05:14:33.038580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:25:18.505 [2024-07-24 05:14:33.038595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.505 [2024-07-24 05:14:33.086503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.505 [2024-07-24 05:14:33.086580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:18.505 [2024-07-24 05:14:33.086599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.505 [2024-07-24 05:14:33.086613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.505 [2024-07-24 05:14:33.086694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.505 [2024-07-24 05:14:33.086711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:18.505 [2024-07-24 05:14:33.086723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.505 [2024-07-24 05:14:33.086736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.505 [2024-07-24 05:14:33.086931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.505 [2024-07-24 05:14:33.086956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:18.505 [2024-07-24 05:14:33.086979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.505 [2024-07-24 05:14:33.087008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.505 [2024-07-24 05:14:33.087037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.505 [2024-07-24 05:14:33.087057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:18.505 [2024-07-24 05:14:33.087070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.505 [2024-07-24 05:14:33.087084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.764 [2024-07-24 05:14:33.175002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.764 [2024-07-24 05:14:33.175083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:18.764 [2024-07-24 05:14:33.175102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.764 [2024-07-24 05:14:33.175116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.764 [2024-07-24 05:14:33.251524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.764 [2024-07-24 05:14:33.251598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:18.764 [2024-07-24 05:14:33.251630] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.764 [2024-07-24 05:14:33.251645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.764 [2024-07-24 05:14:33.251760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.764 [2024-07-24 05:14:33.251789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:18.764 [2024-07-24 05:14:33.251817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.764 [2024-07-24 05:14:33.251831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.764 [2024-07-24 05:14:33.251962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.764 [2024-07-24 05:14:33.251991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:18.764 [2024-07-24 05:14:33.252006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.764 [2024-07-24 05:14:33.252020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.764 [2024-07-24 05:14:33.252174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.764 [2024-07-24 05:14:33.252200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:18.764 [2024-07-24 05:14:33.252217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.764 [2024-07-24 05:14:33.252231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.764 [2024-07-24 05:14:33.252299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.764 [2024-07-24 05:14:33.252328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:18.764 [2024-07-24 05:14:33.252342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.764 [2024-07-24 05:14:33.252356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.764 [2024-07-24 05:14:33.252403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.764 [2024-07-24 05:14:33.252437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:18.764 [2024-07-24 05:14:33.252468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.764 [2024-07-24 05:14:33.252499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.764 [2024-07-24 05:14:33.252556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.764 [2024-07-24 05:14:33.252581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:18.764 [2024-07-24 05:14:33.252595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.764 [2024-07-24 05:14:33.252609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.764 [2024-07-24 05:14:33.252769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 418.390 ms, result 0 00:25:18.764 true 00:25:18.764 05:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 83072 00:25:18.765 05:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid83072 00:25:18.765 05:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:18.765 [2024-07-24 05:14:33.384724] Starting 
SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:25:18.765 [2024-07-24 05:14:33.385022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84010 ] 00:25:19.022 [2024-07-24 05:14:33.554467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.279 [2024-07-24 05:14:33.725648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.633  Copying: 207/1024 [MB] (207 MBps) Copying: 411/1024 [MB] (204 MBps) Copying: 609/1024 [MB] (198 MBps) Copying: 802/1024 [MB] (193 MBps) Copying: 984/1024 [MB] (181 MBps) Copying: 1024/1024 [MB] (average 197 MBps) 00:25:25.633 00:25:25.633 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 83072 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:25.633 05:14:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:25.892 [2024-07-24 05:14:40.270053] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:25:25.892 [2024-07-24 05:14:40.270495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84084 ] 00:25:25.892 [2024-07-24 05:14:40.441480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.151 [2024-07-24 05:14:40.604416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.411 [2024-07-24 05:14:40.884357] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:26.411 [2024-07-24 05:14:40.884712] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:26.411 [2024-07-24 05:14:40.951261] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:26.411 [2024-07-24 05:14:40.951667] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:26.411 [2024-07-24 05:14:40.951886] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:26.671 [2024-07-24 05:14:41.221953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.671 [2024-07-24 05:14:41.222026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:26.671 [2024-07-24 05:14:41.222046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:26.671 [2024-07-24 05:14:41.222056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.671 [2024-07-24 05:14:41.222159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.671 [2024-07-24 05:14:41.222179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:26.671 [2024-07-24 05:14:41.222191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:26.671 [2024-07-24 05:14:41.222200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.671 [2024-07-24 05:14:41.222227] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:26.671 [2024-07-24 05:14:41.223082] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:26.671 [2024-07-24 05:14:41.223116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.671 [2024-07-24 05:14:41.223130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:26.671 [2024-07-24 05:14:41.223158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:25:26.671 [2024-07-24 05:14:41.223168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.671 [2024-07-24 05:14:41.224476] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:26.671 [2024-07-24 05:14:41.238765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.671 [2024-07-24 05:14:41.238820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:26.671 [2024-07-24 05:14:41.238896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.290 ms 00:25:26.671 [2024-07-24 05:14:41.238908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.671 [2024-07-24 05:14:41.239022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.671 [2024-07-24 05:14:41.239041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:26.671 [2024-07-24 05:14:41.239053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:26.671 [2024-07-24 05:14:41.239064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.671 [2024-07-24 05:14:41.243821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.671 [2024-07-24 05:14:41.243933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:26.671 [2024-07-24 05:14:41.243951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.654 ms 00:25:26.671 [2024-07-24 05:14:41.243961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.671 [2024-07-24 05:14:41.244056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.671 [2024-07-24 05:14:41.244075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:26.671 [2024-07-24 05:14:41.244087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:26.671 [2024-07-24 05:14:41.244097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.671 [2024-07-24 05:14:41.244166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.671 [2024-07-24 05:14:41.244183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:26.671 [2024-07-24 05:14:41.244199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:26.672 [2024-07-24 05:14:41.244209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.672 [2024-07-24 05:14:41.244241] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:26.672 [2024-07-24 05:14:41.248215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.672 [2024-07-24 05:14:41.248250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:26.672 [2024-07-24 05:14:41.248279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.984 ms 00:25:26.672 [2024-07-24 05:14:41.248289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.672 
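Note: the startup traced here is the deliberately dirty restart. After the target that owned ftl0 is killed with SIGKILL (dirty_shutdown.sh line 83 above), line 88 replays the bdev configuration that was captured into ftl.json while the target was still alive; the superblock then loads with shm_clean 0, so FTL takes the recovery route (the blobstore recovery above plus the Restore NV cache / valid map / band info / trim steps traced below) instead of a clean open. A sketch of that capture-and-replay flow, pieced together from the dirty_shutdown.sh commands quoted in this log (not the verbatim script; $SPDK as in the sketch above):

    FTL_JSON=$SPDK/test/ftl/config/ftl.json      # path passed via --json in the log

    # Capture the live bdev subsystem configuration while the target is up
    # (dirty_shutdown.sh lines 64-66).
    {
        echo '{"subsystems": ['
        "$SPDK/scripts/rpc.py" save_subsystem_config -n bdev
        echo ']}'
    } > "$FTL_JSON"

    # After kill -9 of the target (line 83), replaying the JSON through spdk_dd
    # re-creates the whole bdev stack, including ftl0, inside the spdk_dd process
    # and writes testfile2 into the second half of the device (line 88).
    "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile2" --ob=ftl0 \
        --count=262144 --seek=262144 --json="$FTL_JSON"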
[2024-07-24 05:14:41.248329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.672 [2024-07-24 05:14:41.248343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:26.672 [2024-07-24 05:14:41.248354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:26.672 [2024-07-24 05:14:41.248363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.672 [2024-07-24 05:14:41.248410] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:26.672 [2024-07-24 05:14:41.248439] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:26.672 [2024-07-24 05:14:41.248480] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:26.672 [2024-07-24 05:14:41.248499] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:26.672 [2024-07-24 05:14:41.248589] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:26.672 [2024-07-24 05:14:41.248602] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:26.672 [2024-07-24 05:14:41.248616] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:26.672 [2024-07-24 05:14:41.248628] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:26.672 [2024-07-24 05:14:41.248639] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:26.672 [2024-07-24 05:14:41.248654] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:26.672 [2024-07-24 05:14:41.248663] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:26.672 [2024-07-24 05:14:41.248673] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:26.672 [2024-07-24 05:14:41.248682] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:26.672 [2024-07-24 05:14:41.248692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.672 [2024-07-24 05:14:41.248702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:26.672 [2024-07-24 05:14:41.248712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:25:26.672 [2024-07-24 05:14:41.248721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.672 [2024-07-24 05:14:41.248795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.672 [2024-07-24 05:14:41.248808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:26.672 [2024-07-24 05:14:41.248821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:26.672 [2024-07-24 05:14:41.248830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.672 [2024-07-24 05:14:41.248959] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:26.672 [2024-07-24 05:14:41.248993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:26.672 [2024-07-24 05:14:41.249004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:26.672 [2024-07-24 05:14:41.249015] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.672 [2024-07-24 05:14:41.249025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:26.672 [2024-07-24 05:14:41.249034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:26.672 [2024-07-24 05:14:41.249043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:26.672 [2024-07-24 05:14:41.249054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:26.672 [2024-07-24 05:14:41.249064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:26.672 [2024-07-24 05:14:41.249078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:26.672 [2024-07-24 05:14:41.249087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:26.672 [2024-07-24 05:14:41.249098] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:26.672 [2024-07-24 05:14:41.249109] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:26.672 [2024-07-24 05:14:41.249124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:26.672 [2024-07-24 05:14:41.249136] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:26.672 [2024-07-24 05:14:41.249147] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.672 [2024-07-24 05:14:41.249170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:26.672 [2024-07-24 05:14:41.249179] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:26.672 [2024-07-24 05:14:41.249189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.672 [2024-07-24 05:14:41.249198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:26.672 [2024-07-24 05:14:41.249207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:26.672 [2024-07-24 05:14:41.249216] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:26.672 [2024-07-24 05:14:41.249225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:26.672 [2024-07-24 05:14:41.249234] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:26.672 [2024-07-24 05:14:41.249242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:26.672 [2024-07-24 05:14:41.249251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:26.672 [2024-07-24 05:14:41.249276] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:26.672 [2024-07-24 05:14:41.249286] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:26.672 [2024-07-24 05:14:41.249295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:26.672 [2024-07-24 05:14:41.249319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:26.672 [2024-07-24 05:14:41.249327] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:26.672 [2024-07-24 05:14:41.249336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:26.672 [2024-07-24 05:14:41.249346] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:26.672 [2024-07-24 05:14:41.249371] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:26.672 [2024-07-24 05:14:41.249380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:26.672 [2024-07-24 05:14:41.249389] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:26.672 [2024-07-24 05:14:41.249398] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:26.672 [2024-07-24 05:14:41.249407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:26.672 [2024-07-24 05:14:41.249417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:26.672 [2024-07-24 05:14:41.249426] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.672 [2024-07-24 05:14:41.249435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:26.672 [2024-07-24 05:14:41.249445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:26.672 [2024-07-24 05:14:41.249455] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.672 [2024-07-24 05:14:41.249464] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:26.672 [2024-07-24 05:14:41.249474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:26.672 [2024-07-24 05:14:41.249486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:26.672 [2024-07-24 05:14:41.249496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.672 [2024-07-24 05:14:41.249511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:26.672 [2024-07-24 05:14:41.249522] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:26.672 [2024-07-24 05:14:41.249531] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:26.672 [2024-07-24 05:14:41.249541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:26.672 [2024-07-24 05:14:41.249550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:26.672 [2024-07-24 05:14:41.249559] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:26.672 [2024-07-24 05:14:41.249570] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:26.672 [2024-07-24 05:14:41.249583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:26.672 [2024-07-24 05:14:41.249594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:26.672 [2024-07-24 05:14:41.249605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:26.672 [2024-07-24 05:14:41.249615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:26.672 [2024-07-24 05:14:41.249625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:26.672 [2024-07-24 05:14:41.249635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:26.672 [2024-07-24 05:14:41.249646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:26.672 [2024-07-24 05:14:41.249656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:26.672 [2024-07-24 
05:14:41.249666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:26.672 [2024-07-24 05:14:41.249677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:26.672 [2024-07-24 05:14:41.249687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:26.672 [2024-07-24 05:14:41.249697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:26.672 [2024-07-24 05:14:41.249707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:26.672 [2024-07-24 05:14:41.249717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:26.672 [2024-07-24 05:14:41.249728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:26.672 [2024-07-24 05:14:41.249754] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:26.672 [2024-07-24 05:14:41.249766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:26.672 [2024-07-24 05:14:41.249777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:26.672 [2024-07-24 05:14:41.249787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:26.672 [2024-07-24 05:14:41.249798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:26.672 [2024-07-24 05:14:41.249809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:26.672 [2024-07-24 05:14:41.249820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.672 [2024-07-24 05:14:41.249832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:26.672 [2024-07-24 05:14:41.249844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms 00:25:26.672 [2024-07-24 05:14:41.249854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.672 [2024-07-24 05:14:41.292966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.672 [2024-07-24 05:14:41.293025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:26.672 [2024-07-24 05:14:41.293046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.990 ms 00:25:26.672 [2024-07-24 05:14:41.293058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.672 [2024-07-24 05:14:41.293174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.672 [2024-07-24 05:14:41.293191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:26.672 [2024-07-24 05:14:41.293223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:26.672 [2024-07-24 05:14:41.293233] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.931 [2024-07-24 05:14:41.332435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.931 [2024-07-24 05:14:41.332494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:26.931 [2024-07-24 05:14:41.332529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.089 ms 00:25:26.931 [2024-07-24 05:14:41.332539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.931 [2024-07-24 05:14:41.332612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.931 [2024-07-24 05:14:41.332627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:26.932 [2024-07-24 05:14:41.332639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:26.932 [2024-07-24 05:14:41.332649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.333086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.333107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:26.932 [2024-07-24 05:14:41.333120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:25:26.932 [2024-07-24 05:14:41.333130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.333296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.333359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:26.932 [2024-07-24 05:14:41.333372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:25:26.932 [2024-07-24 05:14:41.333383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.348322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.348376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:26.932 [2024-07-24 05:14:41.348411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.911 ms 00:25:26.932 [2024-07-24 05:14:41.348421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.363175] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:26.932 [2024-07-24 05:14:41.363232] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:26.932 [2024-07-24 05:14:41.363269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.363280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:26.932 [2024-07-24 05:14:41.363294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.694 ms 00:25:26.932 [2024-07-24 05:14:41.363303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.391171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.391249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:26.932 [2024-07-24 05:14:41.391284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.809 ms 00:25:26.932 [2024-07-24 05:14:41.391295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 
05:14:41.406610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.406677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:26.932 [2024-07-24 05:14:41.406714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.226 ms 00:25:26.932 [2024-07-24 05:14:41.406725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.422972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.423034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:26.932 [2024-07-24 05:14:41.423070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.163 ms 00:25:26.932 [2024-07-24 05:14:41.423111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.424016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.424071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:26.932 [2024-07-24 05:14:41.424087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:25:26.932 [2024-07-24 05:14:41.424098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.493344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.493417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:26.932 [2024-07-24 05:14:41.493452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.220 ms 00:25:26.932 [2024-07-24 05:14:41.493463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.505088] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:26.932 [2024-07-24 05:14:41.507846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.507925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:26.932 [2024-07-24 05:14:41.507961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.294 ms 00:25:26.932 [2024-07-24 05:14:41.507971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.508114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.508138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:26.932 [2024-07-24 05:14:41.508151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:26.932 [2024-07-24 05:14:41.508162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.508250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.508269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:26.932 [2024-07-24 05:14:41.508297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:26.932 [2024-07-24 05:14:41.508308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.508370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.508384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:26.932 [2024-07-24 05:14:41.508401] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:26.932 [2024-07-24 05:14:41.508412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.508450] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:26.932 [2024-07-24 05:14:41.508466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.508476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:26.932 [2024-07-24 05:14:41.508488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:26.932 [2024-07-24 05:14:41.508498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.536241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.536311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:26.932 [2024-07-24 05:14:41.536346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.718 ms 00:25:26.932 [2024-07-24 05:14:41.536356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.536452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.932 [2024-07-24 05:14:41.536471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:26.932 [2024-07-24 05:14:41.536483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:26.932 [2024-07-24 05:14:41.536492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.932 [2024-07-24 05:14:41.537848] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 315.305 ms, result 0 00:26:11.276  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-24 05:15:25.820388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.276 [2024-07-24 05:15:25.820478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit
core IO channel 00:26:11.276 [2024-07-24 05:15:25.820519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:11.276 [2024-07-24 05:15:25.820531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.276 [2024-07-24 05:15:25.822654] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:11.276 [2024-07-24 05:15:25.829220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.276 [2024-07-24 05:15:25.829260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:11.276 [2024-07-24 05:15:25.829292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.484 ms 00:26:11.276 [2024-07-24 05:15:25.829301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.276 [2024-07-24 05:15:25.841379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.276 [2024-07-24 05:15:25.841447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:11.276 [2024-07-24 05:15:25.841482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.948 ms 00:26:11.276 [2024-07-24 05:15:25.841493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.276 [2024-07-24 05:15:25.860709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.276 [2024-07-24 05:15:25.860756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:11.276 [2024-07-24 05:15:25.860775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.196 ms 00:26:11.276 [2024-07-24 05:15:25.860787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.276 [2024-07-24 05:15:25.866645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.276 [2024-07-24 05:15:25.866683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:11.276 [2024-07-24 05:15:25.866721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.756 ms 00:26:11.276 [2024-07-24 05:15:25.866731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.276 [2024-07-24 05:15:25.895102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.276 [2024-07-24 05:15:25.895146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:11.276 [2024-07-24 05:15:25.895164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.323 ms 00:26:11.276 [2024-07-24 05:15:25.895175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.536 [2024-07-24 05:15:25.916149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.536 [2024-07-24 05:15:25.916206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:11.536 [2024-07-24 05:15:25.916243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.930 ms 00:26:11.536 [2024-07-24 05:15:25.916253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.536 [2024-07-24 05:15:25.995486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.536 [2024-07-24 05:15:25.995550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:11.536 [2024-07-24 05:15:25.995570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.189 ms 00:26:11.536 [2024-07-24 05:15:25.995591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:11.536 [2024-07-24 05:15:26.023068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.536 [2024-07-24 05:15:26.023122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:11.536 [2024-07-24 05:15:26.023154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.454 ms 00:26:11.536 [2024-07-24 05:15:26.023164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.536 [2024-07-24 05:15:26.051698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.536 [2024-07-24 05:15:26.051755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:11.536 [2024-07-24 05:15:26.051789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.493 ms 00:26:11.536 [2024-07-24 05:15:26.051800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.536 [2024-07-24 05:15:26.080992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.536 [2024-07-24 05:15:26.081032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:11.536 [2024-07-24 05:15:26.081063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.117 ms 00:26:11.536 [2024-07-24 05:15:26.081073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.536 [2024-07-24 05:15:26.108976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.536 [2024-07-24 05:15:26.109015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:11.537 [2024-07-24 05:15:26.109030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.817 ms 00:26:11.537 [2024-07-24 05:15:26.109040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.537 [2024-07-24 05:15:26.109079] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:11.537 [2024-07-24 05:15:26.109100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 84480 / 261120 wr_cnt: 1 state: open 00:26:11.537 [2024-07-24 05:15:26.109113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109215] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 
05:15:26.109492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:26:11.537 [2024-07-24 05:15:26.109743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.109993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.110004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.110015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.110025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.110037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.110056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:26:11.537 [2024-07-24 05:15:26.110067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:11.538 [2024-07-24 05:15:26.110077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:11.538 [2024-07-24 05:15:26.110088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:11.538 [2024-07-24 05:15:26.110098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:11.538 [2024-07-24 05:15:26.110109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:11.538 [2024-07-24 05:15:26.110119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:11.538 [2024-07-24 05:15:26.110130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:11.538 [2024-07-24 05:15:26.110140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:11.538 [2024-07-24 05:15:26.110150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:11.538 [2024-07-24 05:15:26.110161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:11.538 [2024-07-24 05:15:26.110172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:11.538 [2024-07-24 05:15:26.110183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:11.538 [2024-07-24 05:15:26.110193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:11.538 [2024-07-24 05:15:26.110204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:11.538 [2024-07-24 05:15:26.110223] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:11.538 [2024-07-24 05:15:26.110233] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d94c5153-45fe-4502-98b6-72c9a7f5f943 00:26:11.538 [2024-07-24 05:15:26.110250] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 84480 00:26:11.538 [2024-07-24 05:15:26.110261] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 85440 00:26:11.538 [2024-07-24 05:15:26.110289] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 84480 00:26:11.538 [2024-07-24 05:15:26.110301] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0114 00:26:11.538 [2024-07-24 05:15:26.110311] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:11.538 [2024-07-24 05:15:26.110321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:11.538 [2024-07-24 05:15:26.110331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:11.538 [2024-07-24 05:15:26.110340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:11.538 [2024-07-24 05:15:26.110350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:11.538 [2024-07-24 05:15:26.110360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.538 [2024-07-24 05:15:26.110371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:11.538 [2024-07-24 
05:15:26.110395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:26:11.538 [2024-07-24 05:15:26.110405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.538 [2024-07-24 05:15:26.125336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.538 [2024-07-24 05:15:26.125371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:11.538 [2024-07-24 05:15:26.125403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.891 ms 00:26:11.538 [2024-07-24 05:15:26.125413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.538 [2024-07-24 05:15:26.125848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.538 [2024-07-24 05:15:26.125920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:11.538 [2024-07-24 05:15:26.125939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:26:11.538 [2024-07-24 05:15:26.125950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.538 [2024-07-24 05:15:26.158499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.538 [2024-07-24 05:15:26.158541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:11.538 [2024-07-24 05:15:26.158572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.538 [2024-07-24 05:15:26.158582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.538 [2024-07-24 05:15:26.158638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.538 [2024-07-24 05:15:26.158651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:11.538 [2024-07-24 05:15:26.158661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.538 [2024-07-24 05:15:26.158670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.538 [2024-07-24 05:15:26.158741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.538 [2024-07-24 05:15:26.158759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:11.538 [2024-07-24 05:15:26.158770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.538 [2024-07-24 05:15:26.158779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.538 [2024-07-24 05:15:26.158799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.538 [2024-07-24 05:15:26.158810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:11.538 [2024-07-24 05:15:26.158831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.538 [2024-07-24 05:15:26.158880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.798 [2024-07-24 05:15:26.245759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.798 [2024-07-24 05:15:26.245824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:11.798 [2024-07-24 05:15:26.245905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.798 [2024-07-24 05:15:26.245920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.798 [2024-07-24 05:15:26.322752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.798 [2024-07-24 05:15:26.322812] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:11.798 [2024-07-24 05:15:26.322848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.798 [2024-07-24 05:15:26.322895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.798 [2024-07-24 05:15:26.323046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.798 [2024-07-24 05:15:26.323084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:11.798 [2024-07-24 05:15:26.323097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.798 [2024-07-24 05:15:26.323107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.798 [2024-07-24 05:15:26.323153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.798 [2024-07-24 05:15:26.323169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:11.798 [2024-07-24 05:15:26.323180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.798 [2024-07-24 05:15:26.323191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.798 [2024-07-24 05:15:26.323340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.798 [2024-07-24 05:15:26.323364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:11.798 [2024-07-24 05:15:26.323377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.798 [2024-07-24 05:15:26.323388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.798 [2024-07-24 05:15:26.323459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.798 [2024-07-24 05:15:26.323478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:11.798 [2024-07-24 05:15:26.323490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.798 [2024-07-24 05:15:26.323501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.798 [2024-07-24 05:15:26.323563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.798 [2024-07-24 05:15:26.323583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:11.798 [2024-07-24 05:15:26.323603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.798 [2024-07-24 05:15:26.323614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.798 [2024-07-24 05:15:26.323684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.798 [2024-07-24 05:15:26.323706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:11.798 [2024-07-24 05:15:26.323718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.798 [2024-07-24 05:15:26.323729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.798 [2024-07-24 05:15:26.323938] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 506.498 ms, result 0 00:26:13.174 00:26:13.174 00:26:13.174 05:15:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:15.079 05:15:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:15.337 [2024-07-24 05:15:29.794625] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:26:15.337 [2024-07-24 05:15:29.794783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84564 ] 00:26:15.337 [2024-07-24 05:15:29.967797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.596 [2024-07-24 05:15:30.165808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.854 [2024-07-24 05:15:30.446054] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:15.854 [2024-07-24 05:15:30.446139] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:16.114 [2024-07-24 05:15:30.607362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.114 [2024-07-24 05:15:30.607441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:16.114 [2024-07-24 05:15:30.607479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:16.114 [2024-07-24 05:15:30.607491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.114 [2024-07-24 05:15:30.607555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.114 [2024-07-24 05:15:30.607574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:16.114 [2024-07-24 05:15:30.607586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:16.114 [2024-07-24 05:15:30.607601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.114 [2024-07-24 05:15:30.607634] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:16.114 [2024-07-24 05:15:30.608482] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:16.114 [2024-07-24 05:15:30.608516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.114 [2024-07-24 05:15:30.608528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:16.115 [2024-07-24 05:15:30.608539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:26:16.115 [2024-07-24 05:15:30.608549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.115 [2024-07-24 05:15:30.609772] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:16.115 [2024-07-24 05:15:30.623419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.115 [2024-07-24 05:15:30.623477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:16.115 [2024-07-24 05:15:30.623510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.641 ms 00:26:16.115 [2024-07-24 05:15:30.623521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.115 [2024-07-24 05:15:30.623591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.115 [2024-07-24 05:15:30.623614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:16.115 [2024-07-24 05:15:30.623626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:16.115 [2024-07-24 
05:15:30.623636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.115 [2024-07-24 05:15:30.628066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.115 [2024-07-24 05:15:30.628100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:16.115 [2024-07-24 05:15:30.628130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.340 ms 00:26:16.115 [2024-07-24 05:15:30.628140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.115 [2024-07-24 05:15:30.628225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.115 [2024-07-24 05:15:30.628242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:16.115 [2024-07-24 05:15:30.628253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:16.115 [2024-07-24 05:15:30.628263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.115 [2024-07-24 05:15:30.628318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.115 [2024-07-24 05:15:30.628334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:16.115 [2024-07-24 05:15:30.628345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:16.115 [2024-07-24 05:15:30.628354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.115 [2024-07-24 05:15:30.628383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:16.115 [2024-07-24 05:15:30.632229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.115 [2024-07-24 05:15:30.632262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:16.115 [2024-07-24 05:15:30.632292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.854 ms 00:26:16.115 [2024-07-24 05:15:30.632301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.115 [2024-07-24 05:15:30.632342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.115 [2024-07-24 05:15:30.632357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:16.115 [2024-07-24 05:15:30.632367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:16.115 [2024-07-24 05:15:30.632377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.115 [2024-07-24 05:15:30.632417] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:16.115 [2024-07-24 05:15:30.632445] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:16.115 [2024-07-24 05:15:30.632480] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:16.115 [2024-07-24 05:15:30.632501] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:16.115 [2024-07-24 05:15:30.632601] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:16.115 [2024-07-24 05:15:30.632618] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:16.115 [2024-07-24 05:15:30.632630] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:16.115 
[2024-07-24 05:15:30.632643] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:16.115 [2024-07-24 05:15:30.632654] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:16.115 [2024-07-24 05:15:30.632664] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:16.115 [2024-07-24 05:15:30.632690] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:16.115 [2024-07-24 05:15:30.632699] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:16.115 [2024-07-24 05:15:30.632708] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:16.115 [2024-07-24 05:15:30.632718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.115 [2024-07-24 05:15:30.632733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:16.115 [2024-07-24 05:15:30.632744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:26:16.115 [2024-07-24 05:15:30.632753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.115 [2024-07-24 05:15:30.632828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.115 [2024-07-24 05:15:30.632841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:16.115 [2024-07-24 05:15:30.632851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:16.115 [2024-07-24 05:15:30.632897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.115 [2024-07-24 05:15:30.632992] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:16.115 [2024-07-24 05:15:30.633008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:16.115 [2024-07-24 05:15:30.633024] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:16.115 [2024-07-24 05:15:30.633034] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.115 [2024-07-24 05:15:30.633044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:16.115 [2024-07-24 05:15:30.633054] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:16.115 [2024-07-24 05:15:30.633063] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:16.115 [2024-07-24 05:15:30.633073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:16.115 [2024-07-24 05:15:30.633082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:16.115 [2024-07-24 05:15:30.633091] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:16.115 [2024-07-24 05:15:30.633115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:16.115 [2024-07-24 05:15:30.633124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:16.115 [2024-07-24 05:15:30.633132] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:16.115 [2024-07-24 05:15:30.633141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:16.115 [2024-07-24 05:15:30.633152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:16.115 [2024-07-24 05:15:30.633161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.115 [2024-07-24 05:15:30.633169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:26:16.115 [2024-07-24 05:15:30.633178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:16.115 [2024-07-24 05:15:30.633187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.115 [2024-07-24 05:15:30.633196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:16.115 [2024-07-24 05:15:30.633217] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:16.115 [2024-07-24 05:15:30.633242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.115 [2024-07-24 05:15:30.633251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:16.115 [2024-07-24 05:15:30.633259] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:16.115 [2024-07-24 05:15:30.633267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.115 [2024-07-24 05:15:30.633275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:16.115 [2024-07-24 05:15:30.633284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:16.115 [2024-07-24 05:15:30.633292] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.115 [2024-07-24 05:15:30.633300] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:16.115 [2024-07-24 05:15:30.633309] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:16.115 [2024-07-24 05:15:30.633317] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.115 [2024-07-24 05:15:30.633325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:16.115 [2024-07-24 05:15:30.633334] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:16.115 [2024-07-24 05:15:30.633342] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:16.115 [2024-07-24 05:15:30.633350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:16.115 [2024-07-24 05:15:30.633359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:16.115 [2024-07-24 05:15:30.633367] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:16.115 [2024-07-24 05:15:30.633376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:16.115 [2024-07-24 05:15:30.633384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:16.115 [2024-07-24 05:15:30.633393] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.115 [2024-07-24 05:15:30.633401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:16.115 [2024-07-24 05:15:30.633409] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:16.115 [2024-07-24 05:15:30.633418] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.115 [2024-07-24 05:15:30.633427] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:16.115 [2024-07-24 05:15:30.633436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:16.115 [2024-07-24 05:15:30.633446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:16.115 [2024-07-24 05:15:30.633458] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.115 [2024-07-24 05:15:30.633469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:16.115 [2024-07-24 05:15:30.633477] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:16.115 [2024-07-24 05:15:30.633486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:16.116 [2024-07-24 05:15:30.633494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:16.116 [2024-07-24 05:15:30.633502] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:16.116 [2024-07-24 05:15:30.633511] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:16.116 [2024-07-24 05:15:30.633521] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:16.116 [2024-07-24 05:15:30.633533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:16.116 [2024-07-24 05:15:30.633544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:16.116 [2024-07-24 05:15:30.633553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:16.116 [2024-07-24 05:15:30.633562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:16.116 [2024-07-24 05:15:30.633572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:16.116 [2024-07-24 05:15:30.633581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:16.116 [2024-07-24 05:15:30.633591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:16.116 [2024-07-24 05:15:30.633600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:16.116 [2024-07-24 05:15:30.633610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:16.116 [2024-07-24 05:15:30.633619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:16.116 [2024-07-24 05:15:30.633628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:16.116 [2024-07-24 05:15:30.633638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:16.116 [2024-07-24 05:15:30.633647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:16.116 [2024-07-24 05:15:30.633657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:16.116 [2024-07-24 05:15:30.633667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:16.116 [2024-07-24 05:15:30.633676] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:16.116 [2024-07-24 05:15:30.633687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:26:16.116 [2024-07-24 05:15:30.633702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:16.116 [2024-07-24 05:15:30.633711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:16.116 [2024-07-24 05:15:30.633721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:16.116 [2024-07-24 05:15:30.633731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:16.116 [2024-07-24 05:15:30.633741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.116 [2024-07-24 05:15:30.633751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:16.116 [2024-07-24 05:15:30.633761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:26:16.116 [2024-07-24 05:15:30.633772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.116 [2024-07-24 05:15:30.672247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.116 [2024-07-24 05:15:30.672299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:16.116 [2024-07-24 05:15:30.672334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.403 ms 00:26:16.116 [2024-07-24 05:15:30.672345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.116 [2024-07-24 05:15:30.672449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.116 [2024-07-24 05:15:30.672463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:16.116 [2024-07-24 05:15:30.672475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:16.116 [2024-07-24 05:15:30.672484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.116 [2024-07-24 05:15:30.704763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.116 [2024-07-24 05:15:30.704813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:16.116 [2024-07-24 05:15:30.704862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.194 ms 00:26:16.116 [2024-07-24 05:15:30.704908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.116 [2024-07-24 05:15:30.704975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.116 [2024-07-24 05:15:30.704998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:16.116 [2024-07-24 05:15:30.705028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:16.116 [2024-07-24 05:15:30.705046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.116 [2024-07-24 05:15:30.705542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.116 [2024-07-24 05:15:30.705566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:16.116 [2024-07-24 05:15:30.705579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:26:16.116 [2024-07-24 05:15:30.705590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.116 [2024-07-24 05:15:30.705748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:16.116 [2024-07-24 05:15:30.705766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:16.116 [2024-07-24 05:15:30.705777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:26:16.116 [2024-07-24 05:15:30.705787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.116 [2024-07-24 05:15:30.720887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.116 [2024-07-24 05:15:30.720921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:16.116 [2024-07-24 05:15:30.721002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.069 ms 00:26:16.116 [2024-07-24 05:15:30.721039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.116 [2024-07-24 05:15:30.736374] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:16.116 [2024-07-24 05:15:30.736415] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:16.116 [2024-07-24 05:15:30.736448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.116 [2024-07-24 05:15:30.736459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:16.116 [2024-07-24 05:15:30.736471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.255 ms 00:26:16.116 [2024-07-24 05:15:30.736480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.375 [2024-07-24 05:15:30.763732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.375 [2024-07-24 05:15:30.763821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:16.375 [2024-07-24 05:15:30.763880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.207 ms 00:26:16.375 [2024-07-24 05:15:30.763909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.375 [2024-07-24 05:15:30.777048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.375 [2024-07-24 05:15:30.777087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:16.376 [2024-07-24 05:15:30.777119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.109 ms 00:26:16.376 [2024-07-24 05:15:30.777129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.376 [2024-07-24 05:15:30.790474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.376 [2024-07-24 05:15:30.790527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:16.376 [2024-07-24 05:15:30.790559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.265 ms 00:26:16.376 [2024-07-24 05:15:30.790569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.376 [2024-07-24 05:15:30.791368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.376 [2024-07-24 05:15:30.791398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:16.376 [2024-07-24 05:15:30.791453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:26:16.376 [2024-07-24 05:15:30.791465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.376 [2024-07-24 05:15:30.865131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.376 [2024-07-24 05:15:30.865192] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:16.376 [2024-07-24 05:15:30.865228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.635 ms 00:26:16.376 [2024-07-24 05:15:30.865247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.376 [2024-07-24 05:15:30.876419] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:16.376 [2024-07-24 05:15:30.878603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.376 [2024-07-24 05:15:30.878636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:16.376 [2024-07-24 05:15:30.878667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.268 ms 00:26:16.376 [2024-07-24 05:15:30.878677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.376 [2024-07-24 05:15:30.878776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.376 [2024-07-24 05:15:30.878794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:16.376 [2024-07-24 05:15:30.878806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:16.376 [2024-07-24 05:15:30.878816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.376 [2024-07-24 05:15:30.880205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.376 [2024-07-24 05:15:30.880240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:16.376 [2024-07-24 05:15:30.880254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:26:16.376 [2024-07-24 05:15:30.880263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.376 [2024-07-24 05:15:30.880296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.376 [2024-07-24 05:15:30.880310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:16.376 [2024-07-24 05:15:30.880321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:16.376 [2024-07-24 05:15:30.880330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.376 [2024-07-24 05:15:30.880367] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:16.376 [2024-07-24 05:15:30.880383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.376 [2024-07-24 05:15:30.880397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:16.376 [2024-07-24 05:15:30.880407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:16.376 [2024-07-24 05:15:30.880416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.376 [2024-07-24 05:15:30.906629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.376 [2024-07-24 05:15:30.906669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:16.376 [2024-07-24 05:15:30.906701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.191 ms 00:26:16.376 [2024-07-24 05:15:30.906719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.376 [2024-07-24 05:15:30.906797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.376 [2024-07-24 05:15:30.906815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:16.376 [2024-07-24 05:15:30.906826] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:16.376 [2024-07-24 05:15:30.906836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.376 [2024-07-24 05:15:30.910624] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.200 ms, result 0 00:26:57.050  Copying: 1072/1048576 [kB] (1072 kBps) Copying: 2680/1048576 [kB] (1608 kBps) Copying: 11524/1048576 [kB] (8844 kBps) Copying: 38/1024 [MB] (26 MBps) Copying: 65/1024 [MB] (26 MBps) Copying: 92/1024 [MB] (27 MBps) Copying: 120/1024 [MB] (27 MBps) Copying: 148/1024 [MB] (28 MBps) Copying: 176/1024 [MB] (27 MBps) Copying: 203/1024 [MB] (27 MBps) Copying: 231/1024 [MB] (27 MBps) Copying: 259/1024 [MB] (28 MBps) Copying: 286/1024 [MB] (27 MBps) Copying: 314/1024 [MB] (27 MBps) Copying: 342/1024 [MB] (27 MBps) Copying: 369/1024 [MB] (27 MBps) Copying: 397/1024 [MB] (27 MBps) Copying: 424/1024 [MB] (27 MBps) Copying: 451/1024 [MB] (27 MBps) Copying: 479/1024 [MB] (27 MBps) Copying: 506/1024 [MB] (27 MBps) Copying: 534/1024 [MB] (27 MBps) Copying: 561/1024 [MB] (27 MBps) Copying: 588/1024 [MB] (27 MBps) Copying: 616/1024 [MB] (27 MBps) Copying: 643/1024 [MB] (27 MBps) Copying: 670/1024 [MB] (26 MBps) Copying: 697/1024 [MB] (27 MBps) Copying: 724/1024 [MB] (26 MBps) Copying: 751/1024 [MB] (27 MBps) Copying: 778/1024 [MB] (27 MBps) Copying: 805/1024 [MB] (26 MBps) Copying: 833/1024 [MB] (28 MBps) Copying: 861/1024 [MB] (27 MBps) Copying: 889/1024 [MB] (28 MBps) Copying: 917/1024 [MB] (27 MBps) Copying: 944/1024 [MB] (27 MBps) Copying: 972/1024 [MB] (27 MBps) Copying: 1000/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-24 05:16:11.535032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.050 [2024-07-24 05:16:11.535137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:57.050 [2024-07-24 05:16:11.535190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:57.050 [2024-07-24 05:16:11.535213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.050 [2024-07-24 05:16:11.535262] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:57.050 [2024-07-24 05:16:11.540210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.050 [2024-07-24 05:16:11.540253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:57.050 [2024-07-24 05:16:11.540278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.911 ms 00:26:57.050 [2024-07-24 05:16:11.540299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.050 [2024-07-24 05:16:11.540666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.050 [2024-07-24 05:16:11.540702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:57.050 [2024-07-24 05:16:11.540735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:26:57.050 [2024-07-24 05:16:11.540756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.050 [2024-07-24 05:16:11.556012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.050 [2024-07-24 05:16:11.556201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:57.050 [2024-07-24 05:16:11.556325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.221 ms 00:26:57.050 [2024-07-24 
05:16:11.556449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.050 [2024-07-24 05:16:11.563536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.050 [2024-07-24 05:16:11.563675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:57.050 [2024-07-24 05:16:11.563790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.001 ms 00:26:57.050 [2024-07-24 05:16:11.563880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.050 [2024-07-24 05:16:11.597869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.050 [2024-07-24 05:16:11.598163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:57.050 [2024-07-24 05:16:11.598288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.771 ms 00:26:57.050 [2024-07-24 05:16:11.598351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.050 [2024-07-24 05:16:11.615147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.050 [2024-07-24 05:16:11.615339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:57.050 [2024-07-24 05:16:11.615512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.632 ms 00:26:57.050 [2024-07-24 05:16:11.615562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.050 [2024-07-24 05:16:11.619416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.050 [2024-07-24 05:16:11.619584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:57.050 [2024-07-24 05:16:11.619693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.796 ms 00:26:57.050 [2024-07-24 05:16:11.619822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.050 [2024-07-24 05:16:11.648823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.050 [2024-07-24 05:16:11.649039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:57.050 [2024-07-24 05:16:11.649145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.843 ms 00:26:57.050 [2024-07-24 05:16:11.649191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.050 [2024-07-24 05:16:11.677860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.050 [2024-07-24 05:16:11.678046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:57.050 [2024-07-24 05:16:11.678188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.526 ms 00:26:57.050 [2024-07-24 05:16:11.678234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.310 [2024-07-24 05:16:11.706859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.310 [2024-07-24 05:16:11.707021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:57.310 [2024-07-24 05:16:11.707122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.557 ms 00:26:57.310 [2024-07-24 05:16:11.707181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.310 [2024-07-24 05:16:11.735184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.310 [2024-07-24 05:16:11.735341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:57.310 [2024-07-24 05:16:11.735499] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.893 ms 00:26:57.310 [2024-07-24 05:16:11.735551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.310 [2024-07-24 05:16:11.735690] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:57.310 [2024-07-24 05:16:11.735728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:57.310 [2024-07-24 05:16:11.735759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:26:57.310 [2024-07-24 05:16:11.735770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.735782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.735793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.735804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.735815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.735841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.735852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.735863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.735908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.735937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.735948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.735959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.735969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.735980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.735992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 
wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:57.310 [2024-07-24 05:16:11.736564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736594] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736894] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:57.311 [2024-07-24 05:16:11.736948] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:57.311 [2024-07-24 05:16:11.736962] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d94c5153-45fe-4502-98b6-72c9a7f5f943 00:26:57.311 [2024-07-24 05:16:11.736989] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:26:57.311 [2024-07-24 05:16:11.737004] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 182208 00:26:57.311 [2024-07-24 05:16:11.737013] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 180224 00:26:57.311 [2024-07-24 05:16:11.737024] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0110 00:26:57.311 [2024-07-24 05:16:11.737037] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:57.311 [2024-07-24 05:16:11.737047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:57.311 [2024-07-24 05:16:11.737056] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:57.311 [2024-07-24 05:16:11.737065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:57.311 [2024-07-24 05:16:11.737074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:57.311 [2024-07-24 05:16:11.737084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.311 [2024-07-24 05:16:11.737098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:57.311 [2024-07-24 05:16:11.737109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:26:57.311 [2024-07-24 05:16:11.737119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.311 [2024-07-24 05:16:11.752238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.311 [2024-07-24 05:16:11.752265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:57.311 [2024-07-24 05:16:11.752284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.064 ms 00:26:57.311 [2024-07-24 05:16:11.752305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.311 [2024-07-24 05:16:11.752670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.311 [2024-07-24 05:16:11.752684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:57.311 [2024-07-24 05:16:11.752695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:26:57.311 [2024-07-24 05:16:11.752705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.311 [2024-07-24 05:16:11.788959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.311 [2024-07-24 05:16:11.789003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:57.311 [2024-07-24 05:16:11.789020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.311 [2024-07-24 05:16:11.789031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.311 [2024-07-24 05:16:11.789112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:26:57.311 [2024-07-24 05:16:11.789127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:57.311 [2024-07-24 05:16:11.789139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.311 [2024-07-24 05:16:11.789150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.311 [2024-07-24 05:16:11.789248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.311 [2024-07-24 05:16:11.789273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:57.311 [2024-07-24 05:16:11.789286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.311 [2024-07-24 05:16:11.789298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.311 [2024-07-24 05:16:11.789320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.311 [2024-07-24 05:16:11.789334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:57.311 [2024-07-24 05:16:11.789346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.311 [2024-07-24 05:16:11.789357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.311 [2024-07-24 05:16:11.884642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.311 [2024-07-24 05:16:11.884694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:57.311 [2024-07-24 05:16:11.884719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.311 [2024-07-24 05:16:11.884730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.570 [2024-07-24 05:16:11.962602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.570 [2024-07-24 05:16:11.962659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:57.570 [2024-07-24 05:16:11.962676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.570 [2024-07-24 05:16:11.962686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.570 [2024-07-24 05:16:11.962794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.570 [2024-07-24 05:16:11.962810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:57.570 [2024-07-24 05:16:11.962829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.570 [2024-07-24 05:16:11.962838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.570 [2024-07-24 05:16:11.962881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.570 [2024-07-24 05:16:11.962895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:57.570 [2024-07-24 05:16:11.962906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.570 [2024-07-24 05:16:11.962916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.570 [2024-07-24 05:16:11.963123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.570 [2024-07-24 05:16:11.963143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:57.570 [2024-07-24 05:16:11.963155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.570 [2024-07-24 05:16:11.963171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.570 
[2024-07-24 05:16:11.963223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:57.570 [2024-07-24 05:16:11.963240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:26:57.570 [2024-07-24 05:16:11.963260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:57.570 [2024-07-24 05:16:11.963270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:57.570 [2024-07-24 05:16:11.963328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:57.570 [2024-07-24 05:16:11.963353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:26:57.570 [2024-07-24 05:16:11.963381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:57.570 [2024-07-24 05:16:11.963402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:57.570 [2024-07-24 05:16:11.963488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:57.570 [2024-07-24 05:16:11.963512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:26:57.570 [2024-07-24 05:16:11.963526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:57.570 [2024-07-24 05:16:11.963538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:57.570 [2024-07-24 05:16:11.963695] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 428.640 ms, result 0
00:26:58.505 
00:26:58.505 
00:26:58.505 05:16:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:00.408 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:27:00.408 05:16:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-07-24 05:16:14.885089] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
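The spdk_dd read-back issued above (dirty_shutdown.sh@95) takes --count and --skip in FTL blocks, so with SPDK FTL's usual 4096-byte block it reads 1 GiB of ftl0, starting 1 GiB in, into testfile2. A quick sketch of the arithmetic (block size assumed):

```python
# Size/offset arithmetic for the spdk_dd read-back above.
FTL_BLOCK_SIZE = 4096   # bytes, assumed
count = 262144          # --count, in FTL blocks
skip = 262144           # --skip, in FTL blocks

size_mib = count * FTL_BLOCK_SIZE // 2**20
offset_mib = skip * FTL_BLOCK_SIZE // 2**20
print(f"reads {size_mib} MiB at offset {offset_mib} MiB")  # 1024 MiB at 1024 MiB
```

That 1024 MiB figure also matches the copy loop earlier in the log, which moved 1024/1024 [MB] between the end of 'FTL startup' (05:15:30.911) and the first shutdown step (05:16:11.535), roughly 41 s, consistent with the reported average of 25 MBps.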
00:27:00.408 [2024-07-24 05:16:14.885533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85007 ] 00:27:00.667 [2024-07-24 05:16:15.050088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.667 [2024-07-24 05:16:15.257385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.925 [2024-07-24 05:16:15.540341] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:00.925 [2024-07-24 05:16:15.540685] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:01.186 [2024-07-24 05:16:15.700150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.186 [2024-07-24 05:16:15.700390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:01.186 [2024-07-24 05:16:15.700524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:01.186 [2024-07-24 05:16:15.700696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.186 [2024-07-24 05:16:15.700825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.186 [2024-07-24 05:16:15.700911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:01.186 [2024-07-24 05:16:15.701038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:01.186 [2024-07-24 05:16:15.701097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.186 [2024-07-24 05:16:15.701233] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:01.186 [2024-07-24 05:16:15.702310] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:01.186 [2024-07-24 05:16:15.702358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.186 [2024-07-24 05:16:15.702373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:01.186 [2024-07-24 05:16:15.702386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.140 ms 00:27:01.186 [2024-07-24 05:16:15.702397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.186 [2024-07-24 05:16:15.703666] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:01.186 [2024-07-24 05:16:15.718560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.186 [2024-07-24 05:16:15.718605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:01.186 [2024-07-24 05:16:15.718636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.896 ms 00:27:01.186 [2024-07-24 05:16:15.718653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.186 [2024-07-24 05:16:15.718723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.186 [2024-07-24 05:16:15.718744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:01.186 [2024-07-24 05:16:15.718757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:01.186 [2024-07-24 05:16:15.718767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.186 [2024-07-24 05:16:15.723630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:01.186 [2024-07-24 05:16:15.723680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:01.186 [2024-07-24 05:16:15.723698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.727 ms 00:27:01.186 [2024-07-24 05:16:15.723710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.186 [2024-07-24 05:16:15.723827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.186 [2024-07-24 05:16:15.723860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:01.186 [2024-07-24 05:16:15.723929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:27:01.186 [2024-07-24 05:16:15.723941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.186 [2024-07-24 05:16:15.724029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.186 [2024-07-24 05:16:15.724047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:01.186 [2024-07-24 05:16:15.724060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:01.186 [2024-07-24 05:16:15.724070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.186 [2024-07-24 05:16:15.724104] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:01.186 [2024-07-24 05:16:15.728244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.186 [2024-07-24 05:16:15.728281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:01.186 [2024-07-24 05:16:15.728295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.149 ms 00:27:01.186 [2024-07-24 05:16:15.728305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.186 [2024-07-24 05:16:15.728354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.186 [2024-07-24 05:16:15.728370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:01.186 [2024-07-24 05:16:15.728382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:01.186 [2024-07-24 05:16:15.728392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.186 [2024-07-24 05:16:15.728433] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:01.186 [2024-07-24 05:16:15.728472] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:01.186 [2024-07-24 05:16:15.728517] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:01.186 [2024-07-24 05:16:15.728540] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:01.186 [2024-07-24 05:16:15.728652] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:01.186 [2024-07-24 05:16:15.728672] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:01.186 [2024-07-24 05:16:15.728687] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:01.186 [2024-07-24 05:16:15.728701] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:01.186 [2024-07-24 05:16:15.728713] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:01.186 [2024-07-24 05:16:15.728724] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:01.186 [2024-07-24 05:16:15.728734] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:01.186 [2024-07-24 05:16:15.728744] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:01.186 [2024-07-24 05:16:15.728754] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:01.186 [2024-07-24 05:16:15.728780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.186 [2024-07-24 05:16:15.728796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:01.186 [2024-07-24 05:16:15.728823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:27:01.186 [2024-07-24 05:16:15.728834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.186 [2024-07-24 05:16:15.728955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.186 [2024-07-24 05:16:15.728992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:01.186 [2024-07-24 05:16:15.729005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:27:01.186 [2024-07-24 05:16:15.729016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.186 [2024-07-24 05:16:15.729149] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:01.186 [2024-07-24 05:16:15.729168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:01.186 [2024-07-24 05:16:15.729187] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:01.186 [2024-07-24 05:16:15.729198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:01.186 [2024-07-24 05:16:15.729209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:01.186 [2024-07-24 05:16:15.729220] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:01.186 [2024-07-24 05:16:15.729230] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:01.186 [2024-07-24 05:16:15.729255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:01.186 [2024-07-24 05:16:15.729265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:01.186 [2024-07-24 05:16:15.729274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:01.186 [2024-07-24 05:16:15.729284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:01.186 [2024-07-24 05:16:15.729294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:01.186 [2024-07-24 05:16:15.729304] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:01.186 [2024-07-24 05:16:15.729313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:01.186 [2024-07-24 05:16:15.729324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:01.186 [2024-07-24 05:16:15.729333] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:01.186 [2024-07-24 05:16:15.729343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:01.186 [2024-07-24 05:16:15.729353] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:01.186 [2024-07-24 05:16:15.729362] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:01.186 [2024-07-24 05:16:15.729372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:01.186 [2024-07-24 05:16:15.729395] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:01.186 [2024-07-24 05:16:15.729405] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:01.186 [2024-07-24 05:16:15.729429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:01.186 [2024-07-24 05:16:15.729439] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:01.186 [2024-07-24 05:16:15.729448] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:01.187 [2024-07-24 05:16:15.729457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:01.187 [2024-07-24 05:16:15.729467] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:01.187 [2024-07-24 05:16:15.729476] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:01.187 [2024-07-24 05:16:15.729485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:01.187 [2024-07-24 05:16:15.729495] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:01.187 [2024-07-24 05:16:15.729505] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:01.187 [2024-07-24 05:16:15.729515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:01.187 [2024-07-24 05:16:15.729524] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:01.187 [2024-07-24 05:16:15.729535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:01.187 [2024-07-24 05:16:15.729545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:01.187 [2024-07-24 05:16:15.729557] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:01.187 [2024-07-24 05:16:15.729567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:01.187 [2024-07-24 05:16:15.729577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:01.187 [2024-07-24 05:16:15.729586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:01.187 [2024-07-24 05:16:15.729610] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:01.187 [2024-07-24 05:16:15.729619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:01.187 [2024-07-24 05:16:15.729649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:01.187 [2024-07-24 05:16:15.729658] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:01.187 [2024-07-24 05:16:15.729667] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:01.187 [2024-07-24 05:16:15.729678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:01.187 [2024-07-24 05:16:15.729688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:01.187 [2024-07-24 05:16:15.729698] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:01.187 [2024-07-24 05:16:15.729708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:01.187 [2024-07-24 05:16:15.729718] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:01.187 [2024-07-24 05:16:15.729727] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:01.187 
[2024-07-24 05:16:15.729736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:01.187 [2024-07-24 05:16:15.729746] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:01.187 [2024-07-24 05:16:15.729755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:01.187 [2024-07-24 05:16:15.729766] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:01.187 [2024-07-24 05:16:15.729780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:01.187 [2024-07-24 05:16:15.729792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:01.187 [2024-07-24 05:16:15.729802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:01.187 [2024-07-24 05:16:15.729814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:01.187 [2024-07-24 05:16:15.729824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:01.187 [2024-07-24 05:16:15.729835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:01.187 [2024-07-24 05:16:15.729845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:01.187 [2024-07-24 05:16:15.729856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:01.187 [2024-07-24 05:16:15.729866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:01.187 [2024-07-24 05:16:15.729877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:01.187 [2024-07-24 05:16:15.729888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:01.187 [2024-07-24 05:16:15.729899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:01.187 [2024-07-24 05:16:15.729909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:01.187 [2024-07-24 05:16:15.729937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:01.187 [2024-07-24 05:16:15.729950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:01.187 [2024-07-24 05:16:15.729960] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:01.187 [2024-07-24 05:16:15.729972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:01.187 [2024-07-24 05:16:15.730004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:01.187 [2024-07-24 05:16:15.730015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:01.187 [2024-07-24 05:16:15.730040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:01.187 [2024-07-24 05:16:15.730050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:01.187 [2024-07-24 05:16:15.730061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.187 [2024-07-24 05:16:15.730071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:01.187 [2024-07-24 05:16:15.730082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:27:01.187 [2024-07-24 05:16:15.730092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.187 [2024-07-24 05:16:15.770570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.187 [2024-07-24 05:16:15.770662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:01.187 [2024-07-24 05:16:15.770699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.421 ms 00:27:01.187 [2024-07-24 05:16:15.770711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.187 [2024-07-24 05:16:15.770831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.187 [2024-07-24 05:16:15.770848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:01.187 [2024-07-24 05:16:15.771119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:27:01.187 [2024-07-24 05:16:15.771166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.187 [2024-07-24 05:16:15.805480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.187 [2024-07-24 05:16:15.805714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:01.187 [2024-07-24 05:16:15.805743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.180 ms 00:27:01.187 [2024-07-24 05:16:15.805755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.187 [2024-07-24 05:16:15.805822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.187 [2024-07-24 05:16:15.805840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:01.187 [2024-07-24 05:16:15.805853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:01.187 [2024-07-24 05:16:15.806045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.187 [2024-07-24 05:16:15.806499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.187 [2024-07-24 05:16:15.806529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:01.187 [2024-07-24 05:16:15.806543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:27:01.187 [2024-07-24 05:16:15.806553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.187 [2024-07-24 05:16:15.806696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.187 [2024-07-24 05:16:15.806714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:01.187 [2024-07-24 05:16:15.806725] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:27:01.187 [2024-07-24 05:16:15.806734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.447 [2024-07-24 05:16:15.822591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.447 [2024-07-24 05:16:15.822790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:01.447 [2024-07-24 05:16:15.822989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.827 ms 00:27:01.447 [2024-07-24 05:16:15.823139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.447 [2024-07-24 05:16:15.838074] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:01.447 [2024-07-24 05:16:15.838317] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:01.447 [2024-07-24 05:16:15.838436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.447 [2024-07-24 05:16:15.838454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:01.447 [2024-07-24 05:16:15.838468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.095 ms 00:27:01.447 [2024-07-24 05:16:15.838479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.447 [2024-07-24 05:16:15.865428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.447 [2024-07-24 05:16:15.865652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:01.447 [2024-07-24 05:16:15.865770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.900 ms 00:27:01.447 [2024-07-24 05:16:15.865819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.447 [2024-07-24 05:16:15.880204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.447 [2024-07-24 05:16:15.880241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:01.447 [2024-07-24 05:16:15.880272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.176 ms 00:27:01.447 [2024-07-24 05:16:15.880282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.447 [2024-07-24 05:16:15.893876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.447 [2024-07-24 05:16:15.893914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:01.447 [2024-07-24 05:16:15.893944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.553 ms 00:27:01.447 [2024-07-24 05:16:15.893954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.447 [2024-07-24 05:16:15.894652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.447 [2024-07-24 05:16:15.894674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:01.447 [2024-07-24 05:16:15.894686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:27:01.447 [2024-07-24 05:16:15.894696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.447 [2024-07-24 05:16:15.964448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.447 [2024-07-24 05:16:15.964680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:01.447 [2024-07-24 05:16:15.964886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 69.725 ms 00:27:01.447 [2024-07-24 05:16:15.965022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.447 [2024-07-24 05:16:15.977650] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:01.447 [2024-07-24 05:16:15.980438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.447 [2024-07-24 05:16:15.980615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:01.447 [2024-07-24 05:16:15.980782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.312 ms 00:27:01.447 [2024-07-24 05:16:15.980834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.447 [2024-07-24 05:16:15.981066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.447 [2024-07-24 05:16:15.981093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:01.447 [2024-07-24 05:16:15.981107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:01.447 [2024-07-24 05:16:15.981119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.447 [2024-07-24 05:16:15.981816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.447 [2024-07-24 05:16:15.981895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:01.447 [2024-07-24 05:16:15.981913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:27:01.447 [2024-07-24 05:16:15.981925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.447 [2024-07-24 05:16:15.981963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.447 [2024-07-24 05:16:15.981979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:01.447 [2024-07-24 05:16:15.981992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:01.447 [2024-07-24 05:16:15.982003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.447 [2024-07-24 05:16:15.982053] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:01.447 [2024-07-24 05:16:15.982071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.447 [2024-07-24 05:16:15.982087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:01.447 [2024-07-24 05:16:15.982099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:01.447 [2024-07-24 05:16:15.982110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.447 [2024-07-24 05:16:16.009549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.447 [2024-07-24 05:16:16.009588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:01.447 [2024-07-24 05:16:16.009634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.415 ms 00:27:01.447 [2024-07-24 05:16:16.009651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.447 [2024-07-24 05:16:16.009726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.447 [2024-07-24 05:16:16.009743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:01.447 [2024-07-24 05:16:16.009754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:01.447 [2024-07-24 05:16:16.009763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:01.447 [2024-07-24 05:16:16.011346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 310.533 ms, result 0 00:27:46.051  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-24 05:17:00.513211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.051 [2024-07-24 05:17:00.513301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:46.051 [2024-07-24 05:17:00.513331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:46.051 [2024-07-24 05:17:00.513346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.051 [2024-07-24 05:17:00.513394] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:46.051 [2024-07-24 05:17:00.518315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.051 [2024-07-24 05:17:00.518363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:46.051 [2024-07-24 05:17:00.518393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.892 ms 00:27:46.051 [2024-07-24 05:17:00.518416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.051 [2024-07-24 05:17:00.518738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.051 [2024-07-24 05:17:00.518762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:46.051 [2024-07-24 05:17:00.518785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:27:46.051 [2024-07-24 05:17:00.518798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.051 [2024-07-24 05:17:00.523013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.051 [2024-07-24 05:17:00.523048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:46.051 [2024-07-24 05:17:00.523062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.189 ms 00:27:46.051 [2024-07-24 05:17:00.523073]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.051 [2024-07-24 05:17:00.529672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.051 [2024-07-24 05:17:00.529719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:46.051 [2024-07-24 05:17:00.529749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.570 ms 00:27:46.051 [2024-07-24 05:17:00.529759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.051 [2024-07-24 05:17:00.557491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.051 [2024-07-24 05:17:00.557528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:46.051 [2024-07-24 05:17:00.557559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.662 ms 00:27:46.051 [2024-07-24 05:17:00.557568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.051 [2024-07-24 05:17:00.573252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.051 [2024-07-24 05:17:00.573289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:46.051 [2024-07-24 05:17:00.573320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.636 ms 00:27:46.051 [2024-07-24 05:17:00.573330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.051 [2024-07-24 05:17:00.577294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.051 [2024-07-24 05:17:00.577337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:46.051 [2024-07-24 05:17:00.577375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.937 ms 00:27:46.051 [2024-07-24 05:17:00.577386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.052 [2024-07-24 05:17:00.604434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.052 [2024-07-24 05:17:00.604471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:46.052 [2024-07-24 05:17:00.604501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.014 ms 00:27:46.052 [2024-07-24 05:17:00.604511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.052 [2024-07-24 05:17:00.631236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.052 [2024-07-24 05:17:00.631272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:46.052 [2024-07-24 05:17:00.631301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.686 ms 00:27:46.052 [2024-07-24 05:17:00.631311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.052 [2024-07-24 05:17:00.657740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.052 [2024-07-24 05:17:00.657777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:46.052 [2024-07-24 05:17:00.657821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.391 ms 00:27:46.052 [2024-07-24 05:17:00.657830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.312 [2024-07-24 05:17:00.687208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.312 [2024-07-24 05:17:00.687245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:46.312 [2024-07-24 05:17:00.687275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 29.267 ms 00:27:46.312 [2024-07-24 05:17:00.687284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.312 [2024-07-24 05:17:00.687323] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:46.312 [2024-07-24 05:17:00.687344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:46.312 [2024-07-24 05:17:00.687357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:27:46.312 [2024-07-24 05:17:00.687368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 
00:27:46.312 [2024-07-24 05:17:00.687626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:46.312 [2024-07-24 05:17:00.687734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 
wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.687997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688572] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:46.313 [2024-07-24 05:17:00.688614] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:46.313 [2024-07-24 05:17:00.688625] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d94c5153-45fe-4502-98b6-72c9a7f5f943 00:27:46.313 [2024-07-24 05:17:00.688643] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:27:46.313 [2024-07-24 05:17:00.688654] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:46.313 [2024-07-24 05:17:00.688664] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:46.313 [2024-07-24 05:17:00.688675] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:46.313 [2024-07-24 05:17:00.688685] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:46.313 [2024-07-24 05:17:00.688696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:46.313 [2024-07-24 05:17:00.688707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:46.313 [2024-07-24 05:17:00.688717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:46.313 [2024-07-24 05:17:00.688726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:46.313 [2024-07-24 05:17:00.688737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.313 [2024-07-24 05:17:00.688748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:46.313 [2024-07-24 05:17:00.688764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:27:46.313 [2024-07-24 05:17:00.688775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.313 [2024-07-24 05:17:00.703287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.313 [2024-07-24 05:17:00.703321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:46.313 [2024-07-24 05:17:00.703363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.472 ms 00:27:46.313 [2024-07-24 05:17:00.703373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.313 [2024-07-24 05:17:00.703819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.313 [2024-07-24 05:17:00.703872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:46.313 [2024-07-24 05:17:00.703902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:27:46.313 [2024-07-24 05:17:00.703918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.313 [2024-07-24 05:17:00.735098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.313 [2024-07-24 05:17:00.735143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:46.313 [2024-07-24 05:17:00.735175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.313 [2024-07-24 05:17:00.735184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.314 [2024-07-24 05:17:00.735241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.314 [2024-07-24 
05:17:00.735255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:46.314 [2024-07-24 05:17:00.735266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.314 [2024-07-24 05:17:00.735288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.314 [2024-07-24 05:17:00.735391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.314 [2024-07-24 05:17:00.735409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:46.314 [2024-07-24 05:17:00.735420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.314 [2024-07-24 05:17:00.735429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.314 [2024-07-24 05:17:00.735476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.314 [2024-07-24 05:17:00.735490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:46.314 [2024-07-24 05:17:00.735501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.314 [2024-07-24 05:17:00.735511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.314 [2024-07-24 05:17:00.819024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.314 [2024-07-24 05:17:00.819084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:46.314 [2024-07-24 05:17:00.819117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.314 [2024-07-24 05:17:00.819126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.314 [2024-07-24 05:17:00.890570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.314 [2024-07-24 05:17:00.890625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:46.314 [2024-07-24 05:17:00.890657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.314 [2024-07-24 05:17:00.890674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.314 [2024-07-24 05:17:00.890746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.314 [2024-07-24 05:17:00.890762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:46.314 [2024-07-24 05:17:00.890772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.314 [2024-07-24 05:17:00.890781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.314 [2024-07-24 05:17:00.890842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.314 [2024-07-24 05:17:00.890880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:46.314 [2024-07-24 05:17:00.890910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.314 [2024-07-24 05:17:00.890919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.314 [2024-07-24 05:17:00.891048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.314 [2024-07-24 05:17:00.891067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:46.314 [2024-07-24 05:17:00.891078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.314 [2024-07-24 05:17:00.891088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.314 [2024-07-24 05:17:00.891137] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.314 [2024-07-24 05:17:00.891153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:46.314 [2024-07-24 05:17:00.891164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.314 [2024-07-24 05:17:00.891174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.314 [2024-07-24 05:17:00.891269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.314 [2024-07-24 05:17:00.891299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:46.314 [2024-07-24 05:17:00.891310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.314 [2024-07-24 05:17:00.891321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.314 [2024-07-24 05:17:00.891369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.314 [2024-07-24 05:17:00.891385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:46.314 [2024-07-24 05:17:00.891397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.314 [2024-07-24 05:17:00.891407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.314 [2024-07-24 05:17:00.891584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 378.354 ms, result 0 00:27:47.249 00:27:47.249 00:27:47.249 05:17:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:49.149 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:49.149 05:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:49.149 05:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:49.149 05:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:49.149 05:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:49.408 05:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:49.408 05:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:49.408 05:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:49.408 Process with pid 83072 is not found 00:27:49.408 05:17:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 83072 00:27:49.408 05:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 83072 ']' 00:27:49.408 05:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 83072 00:27:49.408 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (83072) - No such process 00:27:49.408 05:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 83072 is not found' 00:27:49.408 05:17:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:49.667 Remove shared memory files 00:27:49.667 05:17:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:49.667 05:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:49.667 05:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 
-- # rm -f rm -f 00:27:49.667 05:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:49.667 05:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:49.667 05:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:49.667 05:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:49.667 ************************************ 00:27:49.667 END TEST ftl_dirty_shutdown 00:27:49.667 ************************************ 00:27:49.667 00:27:49.667 real 4m0.073s 00:27:49.667 user 4m39.041s 00:27:49.667 sys 0m35.692s 00:27:49.667 05:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:49.667 05:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:49.926 05:17:04 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:49.927 05:17:04 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:27:49.927 05:17:04 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:49.927 05:17:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:49.927 ************************************ 00:27:49.927 START TEST ftl_upgrade_shutdown 00:27:49.927 ************************************ 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:49.927 * Looking for test storage... 00:27:49.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:49.927 
05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85553 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85553 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 85553 ']' 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:49.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:49.927 05:17:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:49.927 [2024-07-24 05:17:04.555024] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:27:49.927 [2024-07-24 05:17:04.555151] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85553 ] 00:27:50.186 [2024-07-24 05:17:04.722598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.445 [2024-07-24 05:17:04.949531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:51.013 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:51.581 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:51.581 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:51.581 05:17:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:51.581 05:17:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bdev_name=basen1 00:27:51.581 05:17:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local bdev_info 00:27:51.581 05:17:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bs 00:27:51.581 05:17:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 
-- # local nb 00:27:51.581 05:17:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:51.840 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:27:51.840 { 00:27:51.840 "name": "basen1", 00:27:51.840 "aliases": [ 00:27:51.840 "798f340a-ed67-4e13-a46a-89c7b76217ee" 00:27:51.840 ], 00:27:51.840 "product_name": "NVMe disk", 00:27:51.840 "block_size": 4096, 00:27:51.840 "num_blocks": 1310720, 00:27:51.840 "uuid": "798f340a-ed67-4e13-a46a-89c7b76217ee", 00:27:51.840 "assigned_rate_limits": { 00:27:51.840 "rw_ios_per_sec": 0, 00:27:51.840 "rw_mbytes_per_sec": 0, 00:27:51.840 "r_mbytes_per_sec": 0, 00:27:51.840 "w_mbytes_per_sec": 0 00:27:51.840 }, 00:27:51.840 "claimed": true, 00:27:51.840 "claim_type": "read_many_write_one", 00:27:51.840 "zoned": false, 00:27:51.840 "supported_io_types": { 00:27:51.840 "read": true, 00:27:51.840 "write": true, 00:27:51.840 "unmap": true, 00:27:51.840 "flush": true, 00:27:51.840 "reset": true, 00:27:51.840 "nvme_admin": true, 00:27:51.840 "nvme_io": true, 00:27:51.840 "nvme_io_md": false, 00:27:51.840 "write_zeroes": true, 00:27:51.840 "zcopy": false, 00:27:51.840 "get_zone_info": false, 00:27:51.840 "zone_management": false, 00:27:51.840 "zone_append": false, 00:27:51.840 "compare": true, 00:27:51.840 "compare_and_write": false, 00:27:51.840 "abort": true, 00:27:51.840 "seek_hole": false, 00:27:51.840 "seek_data": false, 00:27:51.840 "copy": true, 00:27:51.840 "nvme_iov_md": false 00:27:51.840 }, 00:27:51.840 "driver_specific": { 00:27:51.840 "nvme": [ 00:27:51.840 { 00:27:51.840 "pci_address": "0000:00:11.0", 00:27:51.840 "trid": { 00:27:51.840 "trtype": "PCIe", 00:27:51.840 "traddr": "0000:00:11.0" 00:27:51.840 }, 00:27:51.840 "ctrlr_data": { 00:27:51.840 "cntlid": 0, 00:27:51.840 "vendor_id": "0x1b36", 00:27:51.840 "model_number": "QEMU NVMe Ctrl", 00:27:51.840 "serial_number": "12341", 00:27:51.840 "firmware_revision": "8.0.0", 00:27:51.840 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:51.840 "oacs": { 00:27:51.840 "security": 0, 00:27:51.840 "format": 1, 00:27:51.840 "firmware": 0, 00:27:51.840 "ns_manage": 1 00:27:51.840 }, 00:27:51.840 "multi_ctrlr": false, 00:27:51.840 "ana_reporting": false 00:27:51.840 }, 00:27:51.840 "vs": { 00:27:51.840 "nvme_version": "1.4" 00:27:51.840 }, 00:27:51.840 "ns_data": { 00:27:51.840 "id": 1, 00:27:51.840 "can_share": false 00:27:51.840 } 00:27:51.840 } 00:27:51.840 ], 00:27:51.840 "mp_policy": "active_passive" 00:27:51.840 } 00:27:51.840 } 00:27:51.840 ]' 00:27:51.840 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:27:51.840 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # bs=4096 00:27:51.840 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:27:51.840 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # nb=1310720 00:27:51.840 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bdev_size=5120 00:27:51.840 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # echo 5120 00:27:51.840 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:51.840 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:51.840 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:51.840 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:51.840 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:52.099 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=7fb6833e-7570-475f-a7e7-ef9b16cc08ed 00:27:52.099 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:52.099 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7fb6833e-7570-475f-a7e7-ef9b16cc08ed 00:27:52.358 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:52.617 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=5163fdba-35a2-4bb0-9436-603272b3e9ec 00:27:52.617 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 5163fdba-35a2-4bb0-9436-603272b3e9ec 00:27:52.617 05:17:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=3ae83d0c-98b0-47eb-bf53-14ac69adc4b6 00:27:52.617 05:17:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 3ae83d0c-98b0-47eb-bf53-14ac69adc4b6 ]] 00:27:52.617 05:17:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 3ae83d0c-98b0-47eb-bf53-14ac69adc4b6 5120 00:27:52.617 05:17:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:52.617 05:17:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:52.617 05:17:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=3ae83d0c-98b0-47eb-bf53-14ac69adc4b6 00:27:52.617 05:17:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:52.617 05:17:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3ae83d0c-98b0-47eb-bf53-14ac69adc4b6 00:27:52.617 05:17:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bdev_name=3ae83d0c-98b0-47eb-bf53-14ac69adc4b6 00:27:52.617 05:17:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local bdev_info 00:27:52.617 05:17:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bs 00:27:52.617 05:17:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local nb 00:27:52.617 05:17:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3ae83d0c-98b0-47eb-bf53-14ac69adc4b6 00:27:52.876 05:17:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:27:52.876 { 00:27:52.876 "name": "3ae83d0c-98b0-47eb-bf53-14ac69adc4b6", 00:27:52.876 "aliases": [ 00:27:52.876 "lvs/basen1p0" 00:27:52.876 ], 00:27:52.876 "product_name": "Logical Volume", 00:27:52.876 "block_size": 4096, 00:27:52.876 "num_blocks": 5242880, 00:27:52.876 "uuid": "3ae83d0c-98b0-47eb-bf53-14ac69adc4b6", 00:27:52.876 "assigned_rate_limits": { 00:27:52.876 "rw_ios_per_sec": 0, 00:27:52.876 "rw_mbytes_per_sec": 0, 00:27:52.876 "r_mbytes_per_sec": 0, 00:27:52.876 "w_mbytes_per_sec": 0 00:27:52.876 }, 00:27:52.876 "claimed": false, 00:27:52.876 "zoned": false, 00:27:52.876 "supported_io_types": { 00:27:52.876 "read": true, 00:27:52.876 "write": true, 00:27:52.876 "unmap": true, 00:27:52.876 "flush": false, 00:27:52.876 "reset": true, 00:27:52.876 "nvme_admin": false, 00:27:52.876 "nvme_io": false, 00:27:52.876 "nvme_io_md": false, 00:27:52.876 "write_zeroes": true, 00:27:52.876 
"zcopy": false, 00:27:52.876 "get_zone_info": false, 00:27:52.876 "zone_management": false, 00:27:52.876 "zone_append": false, 00:27:52.876 "compare": false, 00:27:52.876 "compare_and_write": false, 00:27:52.876 "abort": false, 00:27:52.876 "seek_hole": true, 00:27:52.876 "seek_data": true, 00:27:52.876 "copy": false, 00:27:52.876 "nvme_iov_md": false 00:27:52.876 }, 00:27:52.876 "driver_specific": { 00:27:52.876 "lvol": { 00:27:52.876 "lvol_store_uuid": "5163fdba-35a2-4bb0-9436-603272b3e9ec", 00:27:52.876 "base_bdev": "basen1", 00:27:52.876 "thin_provision": true, 00:27:52.876 "num_allocated_clusters": 0, 00:27:52.876 "snapshot": false, 00:27:52.876 "clone": false, 00:27:52.876 "esnap_clone": false 00:27:52.876 } 00:27:52.876 } 00:27:52.876 } 00:27:52.876 ]' 00:27:52.876 05:17:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:27:52.876 05:17:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # bs=4096 00:27:52.876 05:17:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:27:53.136 05:17:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # nb=5242880 00:27:53.136 05:17:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bdev_size=20480 00:27:53.136 05:17:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # echo 20480 00:27:53.136 05:17:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:53.136 05:17:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:53.136 05:17:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:53.394 05:17:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:53.394 05:17:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:53.394 05:17:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:53.654 05:17:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:53.654 05:17:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:53.654 05:17:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 3ae83d0c-98b0-47eb-bf53-14ac69adc4b6 -c cachen1p0 --l2p_dram_limit 2 00:27:53.914 [2024-07-24 05:17:08.316265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.914 [2024-07-24 05:17:08.316545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:53.914 [2024-07-24 05:17:08.316578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:53.914 [2024-07-24 05:17:08.316595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.914 [2024-07-24 05:17:08.316680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.914 [2024-07-24 05:17:08.316701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:53.914 [2024-07-24 05:17:08.316715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:27:53.914 [2024-07-24 05:17:08.316729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.914 [2024-07-24 05:17:08.316761] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:53.914 [2024-07-24 05:17:08.317799] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:53.914 [2024-07-24 05:17:08.317825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.914 [2024-07-24 05:17:08.317842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:53.914 [2024-07-24 05:17:08.317866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.072 ms 00:27:53.914 [2024-07-24 05:17:08.317884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.914 [2024-07-24 05:17:08.318030] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 915daee4-481d-4ec6-a3d6-43c6817ef5a6 00:27:53.914 [2024-07-24 05:17:08.319095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.914 [2024-07-24 05:17:08.319137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:53.914 [2024-07-24 05:17:08.319157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:53.914 [2024-07-24 05:17:08.319169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.914 [2024-07-24 05:17:08.323626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.914 [2024-07-24 05:17:08.323670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:53.914 [2024-07-24 05:17:08.323691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.363 ms 00:27:53.914 [2024-07-24 05:17:08.323703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.914 [2024-07-24 05:17:08.323781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.914 [2024-07-24 05:17:08.323799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:53.914 [2024-07-24 05:17:08.323814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:53.914 [2024-07-24 05:17:08.323833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.914 [2024-07-24 05:17:08.323983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.914 [2024-07-24 05:17:08.324001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:53.914 [2024-07-24 05:17:08.324019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:53.914 [2024-07-24 05:17:08.324031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.914 [2024-07-24 05:17:08.324067] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:53.914 [2024-07-24 05:17:08.328446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.914 [2024-07-24 05:17:08.328484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:53.914 [2024-07-24 05:17:08.328516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.393 ms 00:27:53.914 [2024-07-24 05:17:08.328529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.914 [2024-07-24 05:17:08.328566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.914 [2024-07-24 05:17:08.328584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:53.914 [2024-07-24 05:17:08.328596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:53.914 [2024-07-24 05:17:08.328608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
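The sequence logged above (ftl/common.sh) assembles the bdev stack underneath the FTL device: size the base namespace, recreate a logical-volume store on it, thin-provision a volume four times larger than the physical namespace, attach the second NVMe namespace as the non-volatile cache, split off a 5120 MiB piece of it, and create the FTL bdev on top. A condensed sketch of the same RPC calls, assuming a running SPDK target that already has basen1 attached (the 5120 MiB QEMU namespace at 0000:00:11.0 above); the lvstore/lvol UUIDs are whatever the create calls return on a given run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Base namespace size in MiB = block_size * num_blocks / 2^20
  # (4096 * 1310720 / 1048576 = 5120 for basen1 above).
  bs=$($rpc bdev_get_bdevs -b basen1 | jq '.[] .block_size')
  nb=$($rpc bdev_get_bdevs -b basen1 | jq '.[] .num_blocks')
  base_size=$((bs * nb / 1024 / 1024))

  # Drop any lvstore left over from a previous run, then make a fresh one.
  for lvs in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
      $rpc bdev_lvol_delete_lvstore -u "$lvs"
  done
  lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)

  # 20480 MiB thin-provisioned (-t) volume on the 5120 MiB namespace:
  # FTL needs the logical size; clusters are allocated only as data lands.
  base_bdev=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")

  # The second NVMe namespace becomes the NV cache; take a 5120 MiB split.
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
  $rpc bdev_split_create cachen1 -s 5120 1

  # Create the FTL bdev itself; startup is slow, hence the 60 s RPC timeout.
  $rpc -t 60 bdev_ftl_create -b ftl -d "$base_bdev" -c cachen1p0 --l2p_dram_limit 2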
00:27:53.914 [2024-07-24 05:17:08.328650] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:53.914 [2024-07-24 05:17:08.328803] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:53.914 [2024-07-24 05:17:08.328821] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:53.914 [2024-07-24 05:17:08.328865] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:27:53.914 [2024-07-24 05:17:08.328901] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:53.915 [2024-07-24 05:17:08.328916] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:53.915 [2024-07-24 05:17:08.328929] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:53.915 [2024-07-24 05:17:08.328947] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:53.915 [2024-07-24 05:17:08.328958] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:53.915 [2024-07-24 05:17:08.328987] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:53.915 [2024-07-24 05:17:08.329000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.915 [2024-07-24 05:17:08.329013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:53.915 [2024-07-24 05:17:08.329025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.352 ms 00:27:53.915 [2024-07-24 05:17:08.329038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.915 [2024-07-24 05:17:08.329162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.915 [2024-07-24 05:17:08.329180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:53.915 [2024-07-24 05:17:08.329193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.081 ms 00:27:53.915 [2024-07-24 05:17:08.329210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.915 [2024-07-24 05:17:08.329334] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:53.915 [2024-07-24 05:17:08.329358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:53.915 [2024-07-24 05:17:08.329371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:53.915 [2024-07-24 05:17:08.329386] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:53.915 [2024-07-24 05:17:08.329399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:53.915 [2024-07-24 05:17:08.329412] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:53.915 [2024-07-24 05:17:08.329437] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:53.915 [2024-07-24 05:17:08.329452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:53.915 [2024-07-24 05:17:08.329464] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:53.915 [2024-07-24 05:17:08.329479] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:53.915 [2024-07-24 05:17:08.329491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:53.915 [2024-07-24 05:17:08.329505] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:27:53.915 [2024-07-24 05:17:08.329517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:53.915 [2024-07-24 05:17:08.329530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:53.915 [2024-07-24 05:17:08.329542] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:53.915 [2024-07-24 05:17:08.329555] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:53.915 [2024-07-24 05:17:08.329566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:53.915 [2024-07-24 05:17:08.329582] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:53.915 [2024-07-24 05:17:08.329594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:53.915 [2024-07-24 05:17:08.329607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:53.915 [2024-07-24 05:17:08.329619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:53.915 [2024-07-24 05:17:08.329633] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:53.915 [2024-07-24 05:17:08.329645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:53.915 [2024-07-24 05:17:08.329659] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:53.915 [2024-07-24 05:17:08.329670] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:53.915 [2024-07-24 05:17:08.329683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:53.915 [2024-07-24 05:17:08.329695] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:53.915 [2024-07-24 05:17:08.329708] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:53.915 [2024-07-24 05:17:08.329720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:53.915 [2024-07-24 05:17:08.329733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:53.915 [2024-07-24 05:17:08.329745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:53.915 [2024-07-24 05:17:08.329758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:53.915 [2024-07-24 05:17:08.329770] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:53.915 [2024-07-24 05:17:08.329785] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:53.915 [2024-07-24 05:17:08.329797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:53.915 [2024-07-24 05:17:08.329812] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:53.915 [2024-07-24 05:17:08.329825] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:53.915 [2024-07-24 05:17:08.329839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:53.915 [2024-07-24 05:17:08.329851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:53.915 [2024-07-24 05:17:08.329879] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:53.915 [2024-07-24 05:17:08.329906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:53.915 [2024-07-24 05:17:08.329919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:53.915 [2024-07-24 05:17:08.330216] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:53.915 [2024-07-24 05:17:08.330282] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:27:53.915 [2024-07-24 05:17:08.330328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:53.915 [2024-07-24 05:17:08.330449] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:53.915 [2024-07-24 05:17:08.330515] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:53.915 [2024-07-24 05:17:08.330560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:53.915 [2024-07-24 05:17:08.330684] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:53.915 [2024-07-24 05:17:08.330739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:53.915 [2024-07-24 05:17:08.330781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:53.915 [2024-07-24 05:17:08.330915] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:53.915 [2024-07-24 05:17:08.330968] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:53.915 [2024-07-24 05:17:08.331016] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:53.915 [2024-07-24 05:17:08.331129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:53.915 [2024-07-24 05:17:08.331150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:53.915 [2024-07-24 05:17:08.331164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:53.915 [2024-07-24 05:17:08.331178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:53.915 [2024-07-24 05:17:08.331190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:53.915 [2024-07-24 05:17:08.331207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:53.915 [2024-07-24 05:17:08.331220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:53.915 [2024-07-24 05:17:08.331248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:53.915 [2024-07-24 05:17:08.331260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:53.915 [2024-07-24 05:17:08.331274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:53.915 [2024-07-24 05:17:08.331286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:53.915 [2024-07-24 05:17:08.331302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:53.915 [2024-07-24 05:17:08.331314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:53.915 [2024-07-24 05:17:08.331328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:27:53.915 [2024-07-24 05:17:08.331341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:53.915 [2024-07-24 05:17:08.331358] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:53.915 [2024-07-24 05:17:08.331372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:53.915 [2024-07-24 05:17:08.331386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:53.915 [2024-07-24 05:17:08.331399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:53.915 [2024-07-24 05:17:08.331412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:53.915 [2024-07-24 05:17:08.331425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:53.915 [2024-07-24 05:17:08.331471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.915 [2024-07-24 05:17:08.331487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:53.915 [2024-07-24 05:17:08.331502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.185 ms 00:27:53.915 [2024-07-24 05:17:08.331514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.915 [2024-07-24 05:17:08.331578] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
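A quick cross-check of the layout figures printed above, just arithmetic on the dump (the FTL block size here is 4 KiB):

  # L2P: 3774873 entries * 4 B address size = 15099492 B ~ 14.40 MiB,
  # which lands in the 14.50 MiB "Region l2p" (rounded up to region alignment).
  # P2L: 2048 checkpoint pages * 4 KiB = 8.00 MiB per region, and there are
  # four of them (p2l0..p2l3) -- exactly the 8.00 MiB regions in the dump.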
00:27:53.915 [2024-07-24 05:17:08.331597] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:57.202 [2024-07-24 05:17:11.408383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.202 [2024-07-24 05:17:11.408449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:57.202 [2024-07-24 05:17:11.408487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3076.816 ms 00:27:57.202 [2024-07-24 05:17:11.408499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.202 [2024-07-24 05:17:11.435393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.202 [2024-07-24 05:17:11.435465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:57.202 [2024-07-24 05:17:11.435502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.664 ms 00:27:57.202 [2024-07-24 05:17:11.435513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.202 [2024-07-24 05:17:11.435644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.202 [2024-07-24 05:17:11.435663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:57.202 [2024-07-24 05:17:11.435681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:57.202 [2024-07-24 05:17:11.435691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.202 [2024-07-24 05:17:11.466709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.202 [2024-07-24 05:17:11.466762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:57.202 [2024-07-24 05:17:11.466797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.948 ms 00:27:57.202 [2024-07-24 05:17:11.466808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.202 [2024-07-24 05:17:11.466898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.202 [2024-07-24 05:17:11.466915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:57.202 [2024-07-24 05:17:11.466934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:57.202 [2024-07-24 05:17:11.466945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.202 [2024-07-24 05:17:11.467388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.202 [2024-07-24 05:17:11.467412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:57.202 [2024-07-24 05:17:11.467428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.349 ms 00:27:57.202 [2024-07-24 05:17:11.467467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.202 [2024-07-24 05:17:11.467547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.202 [2024-07-24 05:17:11.467568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:57.202 [2024-07-24 05:17:11.467583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:57.202 [2024-07-24 05:17:11.467595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.202 [2024-07-24 05:17:11.482041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.202 [2024-07-24 05:17:11.482081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:57.202 [2024-07-24 05:17:11.482098] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.417 ms 00:27:57.202 [2024-07-24 05:17:11.482110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.202 [2024-07-24 05:17:11.493104] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:57.202 [2024-07-24 05:17:11.493921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.202 [2024-07-24 05:17:11.493955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:57.202 [2024-07-24 05:17:11.493972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.726 ms 00:27:57.202 [2024-07-24 05:17:11.493985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.202 [2024-07-24 05:17:11.533684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.202 [2024-07-24 05:17:11.533762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:57.202 [2024-07-24 05:17:11.533782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.653 ms 00:27:57.202 [2024-07-24 05:17:11.533795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.202 [2024-07-24 05:17:11.533931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.202 [2024-07-24 05:17:11.533954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:57.202 [2024-07-24 05:17:11.533982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:27:57.202 [2024-07-24 05:17:11.533998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.202 [2024-07-24 05:17:11.559274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.202 [2024-07-24 05:17:11.559330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:57.202 [2024-07-24 05:17:11.559347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.218 ms 00:27:57.202 [2024-07-24 05:17:11.559362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.202 [2024-07-24 05:17:11.585095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.202 [2024-07-24 05:17:11.585168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:57.202 [2024-07-24 05:17:11.585185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.689 ms 00:27:57.202 [2024-07-24 05:17:11.585198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.202 [2024-07-24 05:17:11.585788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.202 [2024-07-24 05:17:11.585813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:57.202 [2024-07-24 05:17:11.585829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.545 ms 00:27:57.202 [2024-07-24 05:17:11.585850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.203 [2024-07-24 05:17:11.685128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.203 [2024-07-24 05:17:11.685216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:57.203 [2024-07-24 05:17:11.685237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 99.232 ms 00:27:57.203 [2024-07-24 05:17:11.685256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.203 [2024-07-24 05:17:11.713315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:57.203 [2024-07-24 05:17:11.713420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:57.203 [2024-07-24 05:17:11.713440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.977 ms 00:27:57.203 [2024-07-24 05:17:11.713453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.203 [2024-07-24 05:17:11.743712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.203 [2024-07-24 05:17:11.743773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:57.203 [2024-07-24 05:17:11.743801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.210 ms 00:27:57.203 [2024-07-24 05:17:11.743815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.203 [2024-07-24 05:17:11.769732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.203 [2024-07-24 05:17:11.769792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:57.203 [2024-07-24 05:17:11.769809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.800 ms 00:27:57.203 [2024-07-24 05:17:11.769821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.203 [2024-07-24 05:17:11.769899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.203 [2024-07-24 05:17:11.769921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:57.203 [2024-07-24 05:17:11.769934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:57.203 [2024-07-24 05:17:11.769948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.203 [2024-07-24 05:17:11.770054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.203 [2024-07-24 05:17:11.770077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:57.203 [2024-07-24 05:17:11.770104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:57.203 [2024-07-24 05:17:11.770116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.203 [2024-07-24 05:17:11.771660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3454.739 ms, result 0 00:27:57.203 { 00:27:57.203 "name": "ftl", 00:27:57.203 "uuid": "915daee4-481d-4ec6-a3d6-43c6817ef5a6" 00:27:57.203 } 00:27:57.203 05:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:57.462 [2024-07-24 05:17:12.058444] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.462 05:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:57.721 05:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:57.979 [2024-07-24 05:17:12.494940] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:57.979 05:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:58.237 [2024-07-24 05:17:12.695873] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:58.237 05:17:12 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:58.495 Fill FTL, iteration 1 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=85671 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 85671 /var/tmp/spdk.tgt.sock 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 85671 ']' 00:27:58.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:58.495 05:17:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:58.753 [2024-07-24 05:17:13.162113] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
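The new FTL bdev is exported over NVMe/TCP (the nvmf_* calls a few entries back) so that a separate SPDK process can drive I/O against it, and save_config snapshots the running configuration (its destination is not visible in the trace). Condensed, with the loopback listener used here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport --trtype TCP
  $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl       # namespace 1 = the ftl bdev
  $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  $rpc save_config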
00:27:58.753 [2024-07-24 05:17:13.162306] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85671 ] 00:27:58.753 [2024-07-24 05:17:13.333670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.010 [2024-07-24 05:17:13.534606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.576 05:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:59.577 05:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:27:59.577 05:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:59.835 ftln1 00:27:59.835 05:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:59.835 05:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:00.092 05:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:00.351 05:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 85671 00:28:00.351 05:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 85671 ']' 00:28:00.351 05:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 85671 00:28:00.351 05:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:28:00.351 05:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:00.351 05:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85671 00:28:00.351 killing process with pid 85671 00:28:00.351 05:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:00.351 05:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:00.351 05:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85671' 00:28:00.351 05:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 85671 00:28:00.351 05:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 85671 00:28:02.252 05:17:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:02.252 05:17:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:02.252 [2024-07-24 05:17:16.687398] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
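tcp_dd (ftl/common.sh@198-199) is how the test runs spdk_dd against the exported device: a short-lived spdk_tgt on a private RPC socket attaches the subsystem over TCP (the controller comes up as bdev ftln1), its bdev subsystem config is captured into ini.json, the helper is killed, and each spdk_dd invocation replays that JSON at startup so ftln1 exists inside the dd process. A sketch under those assumptions (the redirect into ini.json is implied by the -f check at common.sh@153 rather than shown in the trace):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk.tgt.sock
  conf=$spdk/test/ftl/config/ini.json

  # One-time initiator setup: build ini.json with a throwaway target process.
  $spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=$sock &
  ini_pid=$!
  # (the test waits here until $sock accepts RPCs)
  $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b ftl \
      -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # -> ftln1
  {
      echo '{"subsystems": ['
      $spdk/scripts/rpc.py -s $sock save_subsystem_config -n bdev
      echo ']}'
  } > $conf
  kill $ini_pid && wait $ini_pid

  # Every tcp_dd call afterwards is just spdk_dd replaying that config:
  $spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=$sock --json=$conf \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0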
00:28:02.252 [2024-07-24 05:17:16.687590] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85724 ] 00:28:02.252 [2024-07-24 05:17:16.859353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.511 [2024-07-24 05:17:17.022661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.108  Copying: 208/1024 [MB] (208 MBps) Copying: 413/1024 [MB] (205 MBps) Copying: 618/1024 [MB] (205 MBps) Copying: 826/1024 [MB] (208 MBps) Copying: 1024/1024 [MB] (average 206 MBps) 00:28:09.108 00:28:09.108 Calculate MD5 checksum, iteration 1 00:28:09.108 05:17:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:09.108 05:17:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:09.108 05:17:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:09.108 05:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:09.108 05:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:09.108 05:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:09.108 05:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:09.108 05:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:09.108 [2024-07-24 05:17:23.505782] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:28:09.108 [2024-07-24 05:17:23.506001] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85794 ] 00:28:09.108 [2024-07-24 05:17:23.675628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.366 [2024-07-24 05:17:23.843694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.615  Copying: 497/1024 [MB] (497 MBps) Copying: 985/1024 [MB] (488 MBps) Copying: 1024/1024 [MB] (average 493 MBps) 00:28:12.615 00:28:12.615 05:17:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:12.615 05:17:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:15.164 05:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:15.164 Fill FTL, iteration 2 00:28:15.164 05:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=147d4511d2c9b626ebe5dac124c95e73 00:28:15.164 05:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:15.164 05:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:15.164 05:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:15.164 05:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:15.164 05:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:15.164 05:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:15.164 05:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:15.164 05:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:15.164 05:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:15.164 [2024-07-24 05:17:29.306562] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:28:15.164 [2024-07-24 05:17:29.306752] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85851 ] 00:28:15.164 [2024-07-24 05:17:29.479690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.164 [2024-07-24 05:17:29.691415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.733  Copying: 205/1024 [MB] (205 MBps) Copying: 400/1024 [MB] (195 MBps) Copying: 588/1024 [MB] (188 MBps) Copying: 787/1024 [MB] (199 MBps) Copying: 987/1024 [MB] (200 MBps) Copying: 1024/1024 [MB] (average 197 MBps) 00:28:21.733 00:28:21.733 05:17:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:21.733 Calculate MD5 checksum, iteration 2 00:28:21.733 05:17:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:21.733 05:17:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:21.733 05:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:21.733 05:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:21.733 05:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:21.733 05:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:21.733 05:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:21.990 [2024-07-24 05:17:36.371895] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
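Both fill/checksum passes above follow the bookkeeping set up at upgrade_shutdown.sh@28-35: offsets are counted in units of bs (1 MiB), each pass writes 1 GiB of fresh random data immediately after the previous one, reads the same window back out of ftln1, and records its MD5 so the data can be re-verified after the shutdown and restart. The loop, condensed:

  bs=1048576; count=1024; qd=2; iterations=2
  seek=0; skip=0; sums=()
  file=/home/vagrant/spdk_repo/spdk/test/ftl/file

  for ((i = 0; i < iterations; i++)); do
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      ((seek += count))                     # 0 -> 1024 -> 2048 MiB into ftln1

      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$skip
      ((skip += count))
      sums[i]=$(md5sum "$file" | cut -f1 -d' ')   # 147d4511... then 38098b37...
  done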
00:28:21.990 [2024-07-24 05:17:36.372068] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85925 ] 00:28:21.990 [2024-07-24 05:17:36.543736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.249 [2024-07-24 05:17:36.712461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.117  Copying: 485/1024 [MB] (485 MBps) Copying: 973/1024 [MB] (488 MBps) Copying: 1024/1024 [MB] (average 486 MBps) 00:28:26.117 00:28:26.117 05:17:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:26.117 05:17:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:28.020 05:17:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:28.020 05:17:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=38098b371c6ae0248a6ed155250c8645 00:28:28.020 05:17:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:28.020 05:17:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:28.020 05:17:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:28.279 [2024-07-24 05:17:42.872345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.279 [2024-07-24 05:17:42.872400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:28.279 [2024-07-24 05:17:42.872420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:28.279 [2024-07-24 05:17:42.872438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.279 [2024-07-24 05:17:42.872473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.279 [2024-07-24 05:17:42.872488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:28.279 [2024-07-24 05:17:42.872499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:28.279 [2024-07-24 05:17:42.872509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.279 [2024-07-24 05:17:42.872548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.279 [2024-07-24 05:17:42.872561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:28.279 [2024-07-24 05:17:42.872572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:28.279 [2024-07-24 05:17:42.872582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.279 [2024-07-24 05:17:42.872691] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.321 ms, result 0 00:28:28.279 true 00:28:28.279 05:17:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:28.538 { 00:28:28.538 "name": "ftl", 00:28:28.538 "properties": [ 00:28:28.538 { 00:28:28.538 "name": "superblock_version", 00:28:28.538 "value": 5, 00:28:28.538 "read-only": true 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "name": "base_device", 00:28:28.538 "bands": [ 00:28:28.538 { 00:28:28.538 "id": 0, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 
00:28:28.538 { 00:28:28.538 "id": 1, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 2, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 3, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 4, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 5, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 6, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 7, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 8, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 9, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 10, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 11, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 12, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 13, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 14, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 15, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 16, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 17, 00:28:28.538 "state": "FREE", 00:28:28.538 "validity": 0.0 00:28:28.538 } 00:28:28.538 ], 00:28:28.538 "read-only": true 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "name": "cache_device", 00:28:28.538 "type": "bdev", 00:28:28.538 "chunks": [ 00:28:28.538 { 00:28:28.538 "id": 0, 00:28:28.538 "state": "INACTIVE", 00:28:28.538 "utilization": 0.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 1, 00:28:28.538 "state": "CLOSED", 00:28:28.538 "utilization": 1.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 2, 00:28:28.538 "state": "CLOSED", 00:28:28.538 "utilization": 1.0 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 3, 00:28:28.538 "state": "OPEN", 00:28:28.538 "utilization": 0.001953125 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "id": 4, 00:28:28.538 "state": "OPEN", 00:28:28.538 "utilization": 0.0 00:28:28.538 } 00:28:28.538 ], 00:28:28.538 "read-only": true 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "name": "verbose_mode", 00:28:28.538 "value": true, 00:28:28.538 "unit": "", 00:28:28.538 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:28.538 }, 00:28:28.538 { 00:28:28.538 "name": "prep_upgrade_on_shutdown", 00:28:28.538 "value": false, 00:28:28.538 "unit": "", 00:28:28.538 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:28.538 } 00:28:28.538 ] 00:28:28.538 } 00:28:28.797 05:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:28.797 [2024-07-24 05:17:43.424922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.797 [2024-07-24 
05:17:43.425005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:28.797 [2024-07-24 05:17:43.425025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:28.797 [2024-07-24 05:17:43.425035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.797 [2024-07-24 05:17:43.425067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.797 [2024-07-24 05:17:43.425080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:28.797 [2024-07-24 05:17:43.425106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:28.797 [2024-07-24 05:17:43.425116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.797 [2024-07-24 05:17:43.425141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.797 [2024-07-24 05:17:43.425154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:28.797 [2024-07-24 05:17:43.425164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:28.797 [2024-07-24 05:17:43.425174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.797 [2024-07-24 05:17:43.425257] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.305 ms, result 0 00:28:29.055 true 00:28:29.055 05:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:29.055 05:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:29.055 05:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:29.313 05:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:29.313 05:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:29.313 05:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:29.572 [2024-07-24 05:17:43.955417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.572 [2024-07-24 05:17:43.955509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:29.572 [2024-07-24 05:17:43.955528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:29.572 [2024-07-24 05:17:43.955539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.572 [2024-07-24 05:17:43.955571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.572 [2024-07-24 05:17:43.955585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:29.572 [2024-07-24 05:17:43.955597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:29.572 [2024-07-24 05:17:43.955608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.572 [2024-07-24 05:17:43.955633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.572 [2024-07-24 05:17:43.955647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:29.572 [2024-07-24 05:17:43.955658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:29.572 [2024-07-24 05:17:43.955669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
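With prep_upgrade_on_shutdown flipped on, the test then checks that the fills actually dirtied the NV cache (upgrade_shutdown.sh@63-64): it counts cache chunks with non-zero utilization and bails out if none are in use. Standalone, the same probe is:

  used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
      jq '[.properties[] | select(.name == "cache_device")
           | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -eq 0 ]] && exit 1
  # In the dump above, chunks 1 and 2 are CLOSED (utilization 1.0) and chunk 3
  # is OPEN at 0.001953125, so used=3 and the check passes.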
00:28:29.572 [2024-07-24 05:17:43.955739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.305 ms, result 0
00:28:29.572 true
00:28:29.572 05:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:28:29.572 {
00:28:29.572 "name": "ftl",
00:28:29.572 "properties": [
00:28:29.572 {
00:28:29.572 "name": "superblock_version",
00:28:29.572 "value": 5,
00:28:29.572 "read-only": true
00:28:29.572 },
00:28:29.573 {
00:28:29.573 "name": "base_device",
00:28:29.573 "bands": [
00:28:29.573 {
00:28:29.573 "id": 0,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 1,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 2,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 3,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 4,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 5,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 6,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 7,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 8,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 9,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 10,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 11,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 12,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 13,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 14,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 15,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 16,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 17,
00:28:29.573 "state": "FREE",
00:28:29.573 "validity": 0.0
00:28:29.573 }
00:28:29.573 ],
00:28:29.573 "read-only": true
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "name": "cache_device",
00:28:29.573 "type": "bdev",
00:28:29.573 "chunks": [
00:28:29.573 {
00:28:29.573 "id": 0,
00:28:29.573 "state": "INACTIVE",
00:28:29.573 "utilization": 0.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 1,
00:28:29.573 "state": "CLOSED",
00:28:29.573 "utilization": 1.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 2,
00:28:29.573 "state": "CLOSED",
00:28:29.573 "utilization": 1.0
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 3,
00:28:29.573 "state": "OPEN",
00:28:29.573 "utilization": 0.001953125
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "id": 4,
00:28:29.573 "state": "OPEN",
00:28:29.573 "utilization": 0.0
00:28:29.573 }
00:28:29.573 ],
00:28:29.573 "read-only": true
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "name": "verbose_mode",
00:28:29.573 "value": true,
00:28:29.573 "unit": "",
00:28:29.573 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:28:29.573 },
00:28:29.573 {
00:28:29.573 "name": "prep_upgrade_on_shutdown",
00:28:29.573 "value": true,
00:28:29.573 "unit": "",
00:28:29.573 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:28:29.573 }
00:28:29.573 ]
00:28:29.573 }
00:28:29.573 05:17:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown
00:28:29.573 05:17:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85553 ]]
00:28:29.573 05:17:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85553
00:28:29.573 05:17:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 85553 ']'
00:28:29.573 05:17:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 85553
00:28:29.573 05:17:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname
00:28:29.573 05:17:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:29.573 05:17:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85553
00:28:29.832 killing process with pid 85553
05:17:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:28:29.832 05:17:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:28:29.832 05:17:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85553'
00:28:29.832 05:17:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 85553
00:28:29.832 05:17:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 85553
00:28:30.768 [2024-07-24 05:17:45.033785] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:28:30.768 [2024-07-24 05:17:45.048295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:30.768 [2024-07-24 05:17:45.048336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:28:30.768 [2024-07-24 05:17:45.048370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:28:30.768 [2024-07-24 05:17:45.048380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:30.768 [2024-07-24 05:17:45.048406] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:28:30.768 [2024-07-24 05:17:45.051324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:30.768 [2024-07-24 05:17:45.051351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:28:30.768 [2024-07-24 05:17:45.051386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.900 ms
00:28:30.768 [2024-07-24 05:17:45.051395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:38.910 [2024-07-24 05:17:53.506209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:38.910 [2024-07-24 05:17:53.506306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:28:38.910 [2024-07-24 05:17:53.506343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8454.843 ms
00:28:38.910 [2024-07-24 05:17:53.506354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:38.910 [2024-07-24 05:17:53.507701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:38.910 [2024-07-24 05:17:53.507728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:38.910 [2024-07-24 05:17:53.507743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.324 ms 00:28:38.910 [2024-07-24 05:17:53.507754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.910 [2024-07-24 05:17:53.509110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.910 [2024-07-24 05:17:53.509142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:38.910 [2024-07-24 05:17:53.509162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.316 ms 00:28:38.910 [2024-07-24 05:17:53.509173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.910 [2024-07-24 05:17:53.521370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.910 [2024-07-24 05:17:53.521405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:38.910 [2024-07-24 05:17:53.521435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.158 ms 00:28:38.910 [2024-07-24 05:17:53.521445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.910 [2024-07-24 05:17:53.528592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.910 [2024-07-24 05:17:53.528632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:38.910 [2024-07-24 05:17:53.528663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.094 ms 00:28:38.910 [2024-07-24 05:17:53.528673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.910 [2024-07-24 05:17:53.528785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.910 [2024-07-24 05:17:53.528805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:38.910 [2024-07-24 05:17:53.528817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:28:38.910 [2024-07-24 05:17:53.528827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.910 [2024-07-24 05:17:53.540092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.910 [2024-07-24 05:17:53.540126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:28:38.910 [2024-07-24 05:17:53.540156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.213 ms 00:28:38.910 [2024-07-24 05:17:53.540167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.169 [2024-07-24 05:17:53.551335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.169 [2024-07-24 05:17:53.551370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:28:39.169 [2024-07-24 05:17:53.551398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.121 ms 00:28:39.169 [2024-07-24 05:17:53.551408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.169 [2024-07-24 05:17:53.562213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.169 [2024-07-24 05:17:53.562262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:39.169 [2024-07-24 05:17:53.562291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.746 ms 00:28:39.169 [2024-07-24 05:17:53.562300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.169 [2024-07-24 05:17:53.573073] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:39.169 [2024-07-24 05:17:53.573107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:39.169 [2024-07-24 05:17:53.573136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.690 ms 00:28:39.169 [2024-07-24 05:17:53.573145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.169 [2024-07-24 05:17:53.573180] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:39.169 [2024-07-24 05:17:53.573200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:39.169 [2024-07-24 05:17:53.573213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:39.169 [2024-07-24 05:17:53.573224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:39.169 [2024-07-24 05:17:53.573234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:39.169 [2024-07-24 05:17:53.573406] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:39.169 [2024-07-24 05:17:53.573416] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 915daee4-481d-4ec6-a3d6-43c6817ef5a6 00:28:39.169 [2024-07-24 05:17:53.573441] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:39.169 [2024-07-24 05:17:53.573450] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 
00:28:39.169 [2024-07-24 05:17:53.573465] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:39.169 [2024-07-24 05:17:53.573492] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:39.169 [2024-07-24 05:17:53.573502] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:39.169 [2024-07-24 05:17:53.573512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:39.169 [2024-07-24 05:17:53.573522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:39.169 [2024-07-24 05:17:53.573531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:39.169 [2024-07-24 05:17:53.573541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:39.169 [2024-07-24 05:17:53.573552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.169 [2024-07-24 05:17:53.573562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:39.169 [2024-07-24 05:17:53.573574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.373 ms 00:28:39.169 [2024-07-24 05:17:53.573584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.169 [2024-07-24 05:17:53.589531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.169 [2024-07-24 05:17:53.589579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:39.169 [2024-07-24 05:17:53.589597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.907 ms 00:28:39.169 [2024-07-24 05:17:53.589609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.169 [2024-07-24 05:17:53.590109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.169 [2024-07-24 05:17:53.590127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:39.169 [2024-07-24 05:17:53.590139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.463 ms 00:28:39.169 [2024-07-24 05:17:53.590164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.169 [2024-07-24 05:17:53.642478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.169 [2024-07-24 05:17:53.642528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:39.169 [2024-07-24 05:17:53.642545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.169 [2024-07-24 05:17:53.642555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.169 [2024-07-24 05:17:53.642611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.169 [2024-07-24 05:17:53.642624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:39.170 [2024-07-24 05:17:53.642635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.170 [2024-07-24 05:17:53.642650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.170 [2024-07-24 05:17:53.642772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.170 [2024-07-24 05:17:53.642796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:39.170 [2024-07-24 05:17:53.642807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.170 [2024-07-24 05:17:53.642818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.170 [2024-07-24 05:17:53.642890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:28:39.170 [2024-07-24 05:17:53.642907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:39.170 [2024-07-24 05:17:53.642918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.170 [2024-07-24 05:17:53.642929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.170 [2024-07-24 05:17:53.730049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.170 [2024-07-24 05:17:53.730110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:39.170 [2024-07-24 05:17:53.730128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.170 [2024-07-24 05:17:53.730139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.428 [2024-07-24 05:17:53.808840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.428 [2024-07-24 05:17:53.808937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:39.428 [2024-07-24 05:17:53.808955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.428 [2024-07-24 05:17:53.808966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.428 [2024-07-24 05:17:53.809063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.428 [2024-07-24 05:17:53.809081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:39.428 [2024-07-24 05:17:53.809100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.428 [2024-07-24 05:17:53.809110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.428 [2024-07-24 05:17:53.809233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.428 [2024-07-24 05:17:53.809252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:39.428 [2024-07-24 05:17:53.809263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.428 [2024-07-24 05:17:53.809274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.428 [2024-07-24 05:17:53.809391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.428 [2024-07-24 05:17:53.809419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:39.428 [2024-07-24 05:17:53.809434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.428 [2024-07-24 05:17:53.809454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.428 [2024-07-24 05:17:53.809510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.428 [2024-07-24 05:17:53.809528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:39.428 [2024-07-24 05:17:53.809540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.428 [2024-07-24 05:17:53.809551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.428 [2024-07-24 05:17:53.809596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.428 [2024-07-24 05:17:53.809612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:39.428 [2024-07-24 05:17:53.809636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.428 [2024-07-24 05:17:53.809654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.428 [2024-07-24 05:17:53.809726] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:39.428 [2024-07-24 05:17:53.809742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:28:39.428 [2024-07-24 05:17:53.809754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:39.428 [2024-07-24 05:17:53.809764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:39.428 [2024-07-24 05:17:53.809926] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8761.648 ms, result 0
00:28:43.614 05:17:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:28:43.614 05:17:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup
00:28:43.614 05:17:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:28:43.614 05:17:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:28:43.614 05:17:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:28:43.614 05:17:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86132
00:28:43.614 05:17:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:28:43.614 05:17:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:28:43.614 05:17:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86132
00:28:43.614 05:17:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86132 ']'
00:28:43.614 05:17:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:43.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
05:17:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:43.614 05:17:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:43.614 05:17:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:43.614 05:17:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:28:43.614 [2024-07-24 05:17:57.541201] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
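tcp_target_setup, traced just above, relaunches spdk_tgt from the saved tgt.json and then sits in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers (max_retries=100 per the trace). A simplified sketch of that launch-and-wait pattern; the polling loop below stands in for waitforlisten's actual body in common/autotest_common.sh, and rpc_get_methods is used only as a cheap probe RPC:

    # Sketch: restart the SPDK target and block until its RPC socket is usable.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    retries=100
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        (( retries-- > 0 )) || { echo "spdk_tgt never came up" >&2; exit 1; }
        sleep 0.5    # retry until the target answers or the retry budget is spent
    done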
00:28:43.614 [2024-07-24 05:17:57.541356] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86132 ] 00:28:43.614 [2024-07-24 05:17:57.702026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.614 [2024-07-24 05:17:57.874740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.182 [2024-07-24 05:17:58.569033] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:44.182 [2024-07-24 05:17:58.569120] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:44.182 [2024-07-24 05:17:58.715056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.182 [2024-07-24 05:17:58.715117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:44.182 [2024-07-24 05:17:58.715152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:44.182 [2024-07-24 05:17:58.715163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.182 [2024-07-24 05:17:58.715225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.182 [2024-07-24 05:17:58.715242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:44.182 [2024-07-24 05:17:58.715254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:44.182 [2024-07-24 05:17:58.715264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.182 [2024-07-24 05:17:58.715299] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:44.182 [2024-07-24 05:17:58.716476] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:44.182 [2024-07-24 05:17:58.716520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.182 [2024-07-24 05:17:58.716536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:44.182 [2024-07-24 05:17:58.716549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.231 ms 00:28:44.182 [2024-07-24 05:17:58.716566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.182 [2024-07-24 05:17:58.717931] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:44.182 [2024-07-24 05:17:58.733332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.182 [2024-07-24 05:17:58.733368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:44.182 [2024-07-24 05:17:58.733399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.403 ms 00:28:44.182 [2024-07-24 05:17:58.733409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.182 [2024-07-24 05:17:58.733505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.182 [2024-07-24 05:17:58.733524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:44.182 [2024-07-24 05:17:58.733535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:28:44.182 [2024-07-24 05:17:58.733545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.182 [2024-07-24 05:17:58.737987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.182 [2024-07-24 
05:17:58.738020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:44.182 [2024-07-24 05:17:58.738048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.346 ms 00:28:44.182 [2024-07-24 05:17:58.738057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.182 [2024-07-24 05:17:58.738132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.182 [2024-07-24 05:17:58.738150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:44.182 [2024-07-24 05:17:58.738164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:28:44.182 [2024-07-24 05:17:58.738174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.182 [2024-07-24 05:17:58.738230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.182 [2024-07-24 05:17:58.738245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:44.182 [2024-07-24 05:17:58.738255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:44.182 [2024-07-24 05:17:58.738264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.182 [2024-07-24 05:17:58.738295] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:44.182 [2024-07-24 05:17:58.742081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.182 [2024-07-24 05:17:58.742113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:44.182 [2024-07-24 05:17:58.742141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.793 ms 00:28:44.182 [2024-07-24 05:17:58.742151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.182 [2024-07-24 05:17:58.742184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.182 [2024-07-24 05:17:58.742197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:44.182 [2024-07-24 05:17:58.742211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:44.182 [2024-07-24 05:17:58.742220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.182 [2024-07-24 05:17:58.742263] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:44.182 [2024-07-24 05:17:58.742291] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:44.182 [2024-07-24 05:17:58.742327] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:44.182 [2024-07-24 05:17:58.742344] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:28:44.182 [2024-07-24 05:17:58.742437] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:44.182 [2024-07-24 05:17:58.742457] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:44.182 [2024-07-24 05:17:58.742486] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:28:44.182 [2024-07-24 05:17:58.742498] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:44.182 [2024-07-24 05:17:58.742509] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:44.182 [2024-07-24 05:17:58.742519] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:44.182 [2024-07-24 05:17:58.742528] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:44.182 [2024-07-24 05:17:58.742538] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:44.182 [2024-07-24 05:17:58.742546] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:44.182 [2024-07-24 05:17:58.742557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.182 [2024-07-24 05:17:58.742566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:44.182 [2024-07-24 05:17:58.742576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.296 ms 00:28:44.182 [2024-07-24 05:17:58.742590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.182 [2024-07-24 05:17:58.742683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.182 [2024-07-24 05:17:58.742701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:44.182 [2024-07-24 05:17:58.742713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:28:44.182 [2024-07-24 05:17:58.742723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.182 [2024-07-24 05:17:58.742822] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:44.182 [2024-07-24 05:17:58.742851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:44.182 [2024-07-24 05:17:58.742861] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:44.182 [2024-07-24 05:17:58.742871] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:44.182 [2024-07-24 05:17:58.742899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:44.182 [2024-07-24 05:17:58.742910] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:44.182 [2024-07-24 05:17:58.742919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:44.182 [2024-07-24 05:17:58.742928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:44.182 [2024-07-24 05:17:58.742938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:44.182 [2024-07-24 05:17:58.742947] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:44.182 [2024-07-24 05:17:58.742956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:44.182 [2024-07-24 05:17:58.742965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:44.182 [2024-07-24 05:17:58.742973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:44.182 [2024-07-24 05:17:58.742983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:44.182 [2024-07-24 05:17:58.742992] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:44.182 [2024-07-24 05:17:58.743000] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:44.182 [2024-07-24 05:17:58.743009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:44.182 [2024-07-24 05:17:58.743017] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:44.182 [2024-07-24 05:17:58.743026] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:44.182 [2024-07-24 05:17:58.743034] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:44.182 [2024-07-24 05:17:58.743043] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:44.182 [2024-07-24 05:17:58.743051] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:44.182 [2024-07-24 05:17:58.743060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:44.182 [2024-07-24 05:17:58.743069] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:44.182 [2024-07-24 05:17:58.743077] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:44.182 [2024-07-24 05:17:58.743086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:44.182 [2024-07-24 05:17:58.743095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:44.182 [2024-07-24 05:17:58.743103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:44.182 [2024-07-24 05:17:58.743111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:44.183 [2024-07-24 05:17:58.743120] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:44.183 [2024-07-24 05:17:58.743129] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:44.183 [2024-07-24 05:17:58.743138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:44.183 [2024-07-24 05:17:58.743146] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:44.183 [2024-07-24 05:17:58.743154] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:44.183 [2024-07-24 05:17:58.743163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:44.183 [2024-07-24 05:17:58.743172] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:44.183 [2024-07-24 05:17:58.743180] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:44.183 [2024-07-24 05:17:58.743194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:44.183 [2024-07-24 05:17:58.743203] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:44.183 [2024-07-24 05:17:58.743212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:44.183 [2024-07-24 05:17:58.743224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:44.183 [2024-07-24 05:17:58.743236] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:44.183 [2024-07-24 05:17:58.743245] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:44.183 [2024-07-24 05:17:58.743253] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:44.183 [2024-07-24 05:17:58.743263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:44.183 [2024-07-24 05:17:58.743273] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:44.183 [2024-07-24 05:17:58.743282] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:44.183 [2024-07-24 05:17:58.743295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:44.183 [2024-07-24 05:17:58.743304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:44.183 [2024-07-24 05:17:58.743313] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:44.183 [2024-07-24 05:17:58.743321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:44.183 [2024-07-24 05:17:58.743342] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:44.183 [2024-07-24 05:17:58.743351] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:44.183 [2024-07-24 05:17:58.743361] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:44.183 [2024-07-24 05:17:58.743373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:44.183 [2024-07-24 05:17:58.743388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:44.183 [2024-07-24 05:17:58.743401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:44.183 [2024-07-24 05:17:58.743415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:44.183 [2024-07-24 05:17:58.743428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:44.183 [2024-07-24 05:17:58.743438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:44.183 [2024-07-24 05:17:58.743493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:44.183 [2024-07-24 05:17:58.743513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:44.183 [2024-07-24 05:17:58.743533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:44.183 [2024-07-24 05:17:58.743553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:44.183 [2024-07-24 05:17:58.743565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:44.183 [2024-07-24 05:17:58.743583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:44.183 [2024-07-24 05:17:58.743601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:44.183 [2024-07-24 05:17:58.743612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:44.183 [2024-07-24 05:17:58.743624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:44.183 [2024-07-24 05:17:58.743636] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:44.183 [2024-07-24 05:17:58.743649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:44.183 [2024-07-24 05:17:58.743661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:44.183 [2024-07-24 05:17:58.743674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:44.183 [2024-07-24 05:17:58.743686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:44.183 [2024-07-24 05:17:58.743698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:44.183 [2024-07-24 05:17:58.743710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.183 [2024-07-24 05:17:58.743722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:44.183 [2024-07-24 05:17:58.743733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.944 ms 00:28:44.183 [2024-07-24 05:17:58.743751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.183 [2024-07-24 05:17:58.743886] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:44.183 [2024-07-24 05:17:58.743922] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:46.726 [2024-07-24 05:18:00.851437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.726 [2024-07-24 05:18:00.851531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:46.726 [2024-07-24 05:18:00.851568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2107.564 ms 00:28:46.726 [2024-07-24 05:18:00.851588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.726 [2024-07-24 05:18:00.879598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.726 [2024-07-24 05:18:00.879656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:46.726 [2024-07-24 05:18:00.879692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.700 ms 00:28:46.726 [2024-07-24 05:18:00.879703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.726 [2024-07-24 05:18:00.879860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.726 [2024-07-24 05:18:00.879879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:46.726 [2024-07-24 05:18:00.879940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:46.726 [2024-07-24 05:18:00.879950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.726 [2024-07-24 05:18:00.914119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.726 [2024-07-24 05:18:00.914194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:46.726 [2024-07-24 05:18:00.914230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.110 ms 00:28:46.726 [2024-07-24 05:18:00.914243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.726 [2024-07-24 05:18:00.914318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.726 [2024-07-24 05:18:00.914334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:46.726 [2024-07-24 05:18:00.914361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:46.726 [2024-07-24 05:18:00.914374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.726 [2024-07-24 05:18:00.914861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.726 [2024-07-24 05:18:00.914881] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:46.726 [2024-07-24 05:18:00.914894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.368 ms 00:28:46.726 [2024-07-24 05:18:00.914906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.726 [2024-07-24 05:18:00.914988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.726 [2024-07-24 05:18:00.915007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:46.726 [2024-07-24 05:18:00.915020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:28:46.726 [2024-07-24 05:18:00.915032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.726 [2024-07-24 05:18:00.930918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.726 [2024-07-24 05:18:00.930957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:46.726 [2024-07-24 05:18:00.930989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.854 ms 00:28:46.726 [2024-07-24 05:18:00.931000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.726 [2024-07-24 05:18:00.945578] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:46.726 [2024-07-24 05:18:00.945618] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:46.726 [2024-07-24 05:18:00.945651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.726 [2024-07-24 05:18:00.945661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:46.726 [2024-07-24 05:18:00.945672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.515 ms 00:28:46.727 [2024-07-24 05:18:00.945682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.727 [2024-07-24 05:18:00.961460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.727 [2024-07-24 05:18:00.961513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:46.727 [2024-07-24 05:18:00.961544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.733 ms 00:28:46.727 [2024-07-24 05:18:00.961553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.727 [2024-07-24 05:18:00.974849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.727 [2024-07-24 05:18:00.974928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:46.727 [2024-07-24 05:18:00.974959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.246 ms 00:28:46.727 [2024-07-24 05:18:00.974968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.727 [2024-07-24 05:18:00.989324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.727 [2024-07-24 05:18:00.989360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:46.727 [2024-07-24 05:18:00.989391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.297 ms 00:28:46.727 [2024-07-24 05:18:00.989399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.727 [2024-07-24 05:18:00.990113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.727 [2024-07-24 05:18:00.990147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:46.727 [2024-07-24 
05:18:00.990174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.605 ms 00:28:46.727 [2024-07-24 05:18:00.990184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.727 [2024-07-24 05:18:01.067057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.727 [2024-07-24 05:18:01.067125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:46.727 [2024-07-24 05:18:01.067160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 76.846 ms 00:28:46.727 [2024-07-24 05:18:01.067170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.727 [2024-07-24 05:18:01.077970] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:46.727 [2024-07-24 05:18:01.078569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.727 [2024-07-24 05:18:01.078600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:46.727 [2024-07-24 05:18:01.078621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.318 ms 00:28:46.727 [2024-07-24 05:18:01.078633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.727 [2024-07-24 05:18:01.078754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.727 [2024-07-24 05:18:01.078791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:46.727 [2024-07-24 05:18:01.078805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:46.727 [2024-07-24 05:18:01.078816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.727 [2024-07-24 05:18:01.078950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.727 [2024-07-24 05:18:01.078971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:46.727 [2024-07-24 05:18:01.078988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:28:46.727 [2024-07-24 05:18:01.079005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.727 [2024-07-24 05:18:01.079041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.727 [2024-07-24 05:18:01.079057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:46.727 [2024-07-24 05:18:01.079068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:46.727 [2024-07-24 05:18:01.079079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.727 [2024-07-24 05:18:01.079149] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:46.727 [2024-07-24 05:18:01.079168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.727 [2024-07-24 05:18:01.079178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:46.727 [2024-07-24 05:18:01.079189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:46.727 [2024-07-24 05:18:01.079199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.727 [2024-07-24 05:18:01.105743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.727 [2024-07-24 05:18:01.105781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:46.727 [2024-07-24 05:18:01.105812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.484 ms 00:28:46.727 [2024-07-24 05:18:01.105822] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:46.727 [2024-07-24 05:18:01.105927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:46.727 [2024-07-24 05:18:01.105946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:28:46.727 [2024-07-24 05:18:01.105958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms
00:28:46.727 [2024-07-24 05:18:01.105991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:46.727 [2024-07-24 05:18:01.107287] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2391.695 ms, result 0
00:28:46.727 [2024-07-24 05:18:01.122174] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:46.727 [2024-07-24 05:18:01.138161] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:28:46.727 [2024-07-24 05:18:01.146438] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:28:46.727 05:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:46.727 05:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0
00:28:46.727 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:28:46.727 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:28:46.727 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:28:46.986 [2024-07-24 05:18:01.450593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:46.986 [2024-07-24 05:18:01.450653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:28:46.987 [2024-07-24 05:18:01.450687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms
00:28:46.987 [2024-07-24 05:18:01.450698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:46.987 [2024-07-24 05:18:01.450730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:46.987 [2024-07-24 05:18:01.450745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:28:46.987 [2024-07-24 05:18:01.450755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:28:46.987 [2024-07-24 05:18:01.450765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:46.987 [2024-07-24 05:18:01.450789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:46.987 [2024-07-24 05:18:01.450801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:28:46.987 [2024-07-24 05:18:01.450812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:28:46.987 [2024-07-24 05:18:01.450829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:46.987 [2024-07-24 05:18:01.450931] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.293 ms, result 0
00:28:46.987 true
00:28:46.987 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:28:47.245 {
00:28:47.245 "name": "ftl",
00:28:47.245 "properties": [
00:28:47.245 {
00:28:47.245 "name": "superblock_version",
00:28:47.245 "value": 5,
00:28:47.245 "read-only": true
00:28:47.245 },
00:28:47.245 {
00:28:47.245 "name": "base_device",
00:28:47.245 "bands": [
00:28:47.245 {
00:28:47.245 "id": 0,
00:28:47.245 "state": "CLOSED",
00:28:47.245 "validity": 1.0
00:28:47.245 },
00:28:47.245 {
00:28:47.245 "id": 1,
00:28:47.246 "state": "CLOSED",
00:28:47.246 "validity": 1.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 2,
00:28:47.246 "state": "CLOSED",
00:28:47.246 "validity": 0.007843137254901933
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 3,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 4,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 5,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 6,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 7,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 8,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 9,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 10,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 11,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 12,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 13,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 14,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 15,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 16,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 17,
00:28:47.246 "state": "FREE",
00:28:47.246 "validity": 0.0
00:28:47.246 }
00:28:47.246 ],
00:28:47.246 "read-only": true
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "name": "cache_device",
00:28:47.246 "type": "bdev",
00:28:47.246 "chunks": [
00:28:47.246 {
00:28:47.246 "id": 0,
00:28:47.246 "state": "INACTIVE",
00:28:47.246 "utilization": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 1,
00:28:47.246 "state": "OPEN",
00:28:47.246 "utilization": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 2,
00:28:47.246 "state": "OPEN",
00:28:47.246 "utilization": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 3,
00:28:47.246 "state": "FREE",
00:28:47.246 "utilization": 0.0
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "id": 4,
00:28:47.246 "state": "FREE",
00:28:47.246 "utilization": 0.0
00:28:47.246 }
00:28:47.246 ],
00:28:47.246 "read-only": true
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "name": "verbose_mode",
00:28:47.246 "value": true,
00:28:47.246 "unit": "",
00:28:47.246 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:28:47.246 },
00:28:47.246 {
00:28:47.246 "name": "prep_upgrade_on_shutdown",
00:28:47.246 "value": false,
00:28:47.246 "unit": "",
00:28:47.246 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:28:47.246 }
00:28:47.246 ]
00:28:47.246 }
00:28:47.246 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:28:47.246 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:28:47.246 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:28:47.505 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:28:47.505 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:28:47.505 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:28:47.505 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:28:47.505 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:28:47.764 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:28:47.764 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:28:47.764 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:28:47.764 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:28:47.764 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:28:47.764 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:28:47.764 Validate MD5 checksum, iteration 1
05:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:28:47.764 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:28:47.764 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:28:47.764 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:28:47.764 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:28:47.764 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:28:47.764 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:28:48.023 [2024-07-24 05:18:02.396373] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
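tcp_dd (ftl/common.sh@198-199), traced above, is a thin wrapper: it verifies the initiator config exists and then delegates to spdk_dd, which runs as a separate SPDK app pinned to core 1 with its own RPC socket so it cannot disturb the target on core 0. The flags follow ordinary dd semantics; the sketch below repeats the traced command with each option annotated:

    # Sketch: iteration 1 of the readback (skip=0), exactly as traced above.
    #   --ib=ftln1      input bdev to read from
    #   --of=...        output is a regular file
    #   --bs=1048576    1 MiB blocks
    #   --count=1024    read 1024 blocks, i.e. a 1 GiB window
    #   --qd=2          keep two requests in flight
    #   --skip=0        offset into the input, in blocks; later iterations pass 1024, 2048
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0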
00:28:48.023 [2024-07-24 05:18:02.396570] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86193 ] 00:28:48.023 [2024-07-24 05:18:02.556724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.282 [2024-07-24 05:18:02.774399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.499  Copying: 496/1024 [MB] (496 MBps) Copying: 980/1024 [MB] (484 MBps) Copying: 1024/1024 [MB] (average 485 MBps) 00:28:52.499 00:28:52.499 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:52.499 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:54.429 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:54.429 Validate MD5 checksum, iteration 2 00:28:54.429 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=147d4511d2c9b626ebe5dac124c95e73 00:28:54.430 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 147d4511d2c9b626ebe5dac124c95e73 != \1\4\7\d\4\5\1\1\d\2\c\9\b\6\2\6\e\b\e\5\d\a\c\1\2\4\c\9\5\e\7\3 ]] 00:28:54.430 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:54.430 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:54.430 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:54.430 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:54.430 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:54.430 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:54.430 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:54.430 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:54.430 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:54.430 [2024-07-24 05:18:08.840712] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
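Each validation pass reads one 1 GiB slice of the ftln1 namespace into a scratch file over NVMe/TCP and records its MD5; skip advances by 1024 MiB per pass, so iteration 1 covers the first gigabyte and iteration 2 the second. Stripped down, the loop is roughly this (a sketch; iterations, tmp_file and sums are placeholder names, tcp_dd is the harness helper seen above):

  skip=0
  for ((i = 0; i < iterations; i++)); do
      tcp_dd --ib=ftln1 --of="$tmp_file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      sums[i]=$(md5sum "$tmp_file" | cut -f1 '-d ')   # digest of this slice
      skip=$((skip + 1024))
  done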
00:28:54.430 [2024-07-24 05:18:08.840909] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86265 ] 00:28:54.430 [2024-07-24 05:18:09.007307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.689 [2024-07-24 05:18:09.171090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.317  Copying: 497/1024 [MB] (497 MBps) Copying: 962/1024 [MB] (465 MBps) Copying: 1024/1024 [MB] (average 480 MBps) 00:28:59.317 00:28:59.317 05:18:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:59.317 05:18:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:01.217 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:01.217 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=38098b371c6ae0248a6ed155250c8645 00:29:01.217 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 38098b371c6ae0248a6ed155250c8645 != \3\8\0\9\8\b\3\7\1\c\6\a\e\0\2\4\8\a\6\e\d\1\5\5\2\5\0\c\8\6\4\5 ]] 00:29:01.217 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:01.217 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:01.217 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:01.217 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 86132 ]] 00:29:01.217 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 86132 00:29:01.217 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:01.217 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:01.217 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:01.217 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:01.218 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:01.218 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86335 00:29:01.218 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:01.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.218 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:01.218 05:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86335 00:29:01.218 05:18:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86335 ']' 00:29:01.218 05:18:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.218 05:18:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:01.218 05:18:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
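What follows is the point of the whole test: tcp_target_shutdown_dirty kills the target with SIGKILL, so FTL never gets to persist its clean-shutdown state, and tcp_target_setup immediately brings up a fresh spdk_tgt from the saved tgt.json, which must recover the device from the superblock, P2L checkpoints and NV cache alone. Reduced to its essentials (a sketch; paths are shown relative to the SPDK repo, waitforlisten is the harness helper visible in the trace):

  kill -9 "$spdk_tgt_pid"          # dirty shutdown: no clean-state persist
  unset spdk_tgt_pid
  build/bin/spdk_tgt '--cpumask=[0]' --config=test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"    # block until /var/tmp/spdk.sock responds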
00:29:01.218 05:18:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:01.218 05:18:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:01.476 [2024-07-24 05:18:15.897926] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:29:01.476 [2024-07-24 05:18:15.898080] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86335 ] 00:29:01.476 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 86132 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:01.476 [2024-07-24 05:18:16.056740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.735 [2024-07-24 05:18:16.223260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.671 [2024-07-24 05:18:16.977341] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:02.671 [2024-07-24 05:18:16.977450] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:02.671 [2024-07-24 05:18:17.124658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.671 [2024-07-24 05:18:17.124730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:02.671 [2024-07-24 05:18:17.124768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:02.671 [2024-07-24 05:18:17.124795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.671 [2024-07-24 05:18:17.124933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.671 [2024-07-24 05:18:17.124954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:02.671 [2024-07-24 05:18:17.124966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:29:02.671 [2024-07-24 05:18:17.124976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.671 [2024-07-24 05:18:17.125030] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:02.671 [2024-07-24 05:18:17.126110] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:02.671 [2024-07-24 05:18:17.126151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.671 [2024-07-24 05:18:17.126165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:02.671 [2024-07-24 05:18:17.126177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.148 ms 00:29:02.671 [2024-07-24 05:18:17.126194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.671 [2024-07-24 05:18:17.126698] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:02.671 [2024-07-24 05:18:17.146455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.671 [2024-07-24 05:18:17.146512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:02.671 [2024-07-24 05:18:17.146552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.757 ms 00:29:02.671 [2024-07-24 05:18:17.146563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.671 [2024-07-24 05:18:17.158323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:02.671 [2024-07-24 05:18:17.158367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:02.671 [2024-07-24 05:18:17.158400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:02.671 [2024-07-24 05:18:17.158411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.671 [2024-07-24 05:18:17.158915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.671 [2024-07-24 05:18:17.158944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:02.671 [2024-07-24 05:18:17.158958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.380 ms 00:29:02.671 [2024-07-24 05:18:17.158968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.671 [2024-07-24 05:18:17.159034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.671 [2024-07-24 05:18:17.159052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:02.671 [2024-07-24 05:18:17.159064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:02.671 [2024-07-24 05:18:17.159074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.671 [2024-07-24 05:18:17.159161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.671 [2024-07-24 05:18:17.159176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:02.671 [2024-07-24 05:18:17.159191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:29:02.671 [2024-07-24 05:18:17.159202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.671 [2024-07-24 05:18:17.159238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:02.671 [2024-07-24 05:18:17.163003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.671 [2024-07-24 05:18:17.163041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:02.671 [2024-07-24 05:18:17.163056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.775 ms 00:29:02.671 [2024-07-24 05:18:17.163067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.671 [2024-07-24 05:18:17.163109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.671 [2024-07-24 05:18:17.163126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:02.671 [2024-07-24 05:18:17.163138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:02.671 [2024-07-24 05:18:17.163149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.671 [2024-07-24 05:18:17.163197] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:02.671 [2024-07-24 05:18:17.163226] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:02.671 [2024-07-24 05:18:17.163270] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:02.671 [2024-07-24 05:18:17.163290] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:29:02.671 [2024-07-24 05:18:17.163389] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:02.671 [2024-07-24 05:18:17.163405] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:02.671 [2024-07-24 05:18:17.163419] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:29:02.671 [2024-07-24 05:18:17.163433] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:02.671 [2024-07-24 05:18:17.163458] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:02.671 [2024-07-24 05:18:17.163471] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:02.671 [2024-07-24 05:18:17.163486] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:02.671 [2024-07-24 05:18:17.163497] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:02.671 [2024-07-24 05:18:17.163508] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:02.671 [2024-07-24 05:18:17.163519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.671 [2024-07-24 05:18:17.163533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:02.671 [2024-07-24 05:18:17.163545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.325 ms 00:29:02.671 [2024-07-24 05:18:17.163556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.671 [2024-07-24 05:18:17.163641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.671 [2024-07-24 05:18:17.163656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:02.671 [2024-07-24 05:18:17.163667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:29:02.671 [2024-07-24 05:18:17.163682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.671 [2024-07-24 05:18:17.163807] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:02.671 [2024-07-24 05:18:17.163824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:02.671 [2024-07-24 05:18:17.163836] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:02.671 [2024-07-24 05:18:17.163848] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.671 [2024-07-24 05:18:17.163893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:02.671 [2024-07-24 05:18:17.163907] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:02.671 [2024-07-24 05:18:17.163919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:02.671 [2024-07-24 05:18:17.163930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:02.671 [2024-07-24 05:18:17.163942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:02.671 [2024-07-24 05:18:17.163953] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.671 [2024-07-24 05:18:17.163963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:02.671 [2024-07-24 05:18:17.163974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:02.671 [2024-07-24 05:18:17.163985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.671 [2024-07-24 05:18:17.163995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:02.671 [2024-07-24 05:18:17.164007] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:29:02.672 [2024-07-24 05:18:17.164018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.672 [2024-07-24 05:18:17.164029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:02.672 [2024-07-24 05:18:17.164040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:02.672 [2024-07-24 05:18:17.164051] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.672 [2024-07-24 05:18:17.164063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:02.672 [2024-07-24 05:18:17.164073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:02.672 [2024-07-24 05:18:17.164084] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:02.672 [2024-07-24 05:18:17.164095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:02.672 [2024-07-24 05:18:17.164105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:02.672 [2024-07-24 05:18:17.164115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:02.672 [2024-07-24 05:18:17.164125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:02.672 [2024-07-24 05:18:17.164136] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:02.672 [2024-07-24 05:18:17.164146] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:02.672 [2024-07-24 05:18:17.164157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:02.672 [2024-07-24 05:18:17.164167] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:02.672 [2024-07-24 05:18:17.164178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:02.672 [2024-07-24 05:18:17.164188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:02.672 [2024-07-24 05:18:17.164198] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:02.672 [2024-07-24 05:18:17.164209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.672 [2024-07-24 05:18:17.164220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:02.672 [2024-07-24 05:18:17.164230] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:02.672 [2024-07-24 05:18:17.164241] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.672 [2024-07-24 05:18:17.164251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:02.672 [2024-07-24 05:18:17.164262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:02.672 [2024-07-24 05:18:17.164272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.672 [2024-07-24 05:18:17.164282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:02.672 [2024-07-24 05:18:17.164293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:02.672 [2024-07-24 05:18:17.164303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.672 [2024-07-24 05:18:17.164313] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:02.672 [2024-07-24 05:18:17.164325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:02.672 [2024-07-24 05:18:17.164336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:02.672 [2024-07-24 05:18:17.164352] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:29:02.672 [2024-07-24 05:18:17.164364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:02.672 [2024-07-24 05:18:17.164375] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:02.672 [2024-07-24 05:18:17.164400] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:02.672 [2024-07-24 05:18:17.164412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:02.672 [2024-07-24 05:18:17.164422] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:02.672 [2024-07-24 05:18:17.164433] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:02.672 [2024-07-24 05:18:17.164446] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:02.672 [2024-07-24 05:18:17.164465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.672 [2024-07-24 05:18:17.164478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:02.672 [2024-07-24 05:18:17.164490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:02.672 [2024-07-24 05:18:17.164502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:02.672 [2024-07-24 05:18:17.164513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:02.672 [2024-07-24 05:18:17.164525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:02.672 [2024-07-24 05:18:17.164537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:02.672 [2024-07-24 05:18:17.164548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:02.672 [2024-07-24 05:18:17.164560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:02.672 [2024-07-24 05:18:17.164572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:02.672 [2024-07-24 05:18:17.164584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:02.672 [2024-07-24 05:18:17.164595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:02.672 [2024-07-24 05:18:17.164607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:02.672 [2024-07-24 05:18:17.164618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:02.672 [2024-07-24 05:18:17.164630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:02.672 [2024-07-24 05:18:17.164643] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:29:02.672 [2024-07-24 05:18:17.164655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.672 [2024-07-24 05:18:17.164668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:02.672 [2024-07-24 05:18:17.164680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:02.672 [2024-07-24 05:18:17.164691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:02.672 [2024-07-24 05:18:17.164703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:02.672 [2024-07-24 05:18:17.164716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.672 [2024-07-24 05:18:17.164728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:02.672 [2024-07-24 05:18:17.164740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.973 ms 00:29:02.672 [2024-07-24 05:18:17.164756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.672 [2024-07-24 05:18:17.195870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.672 [2024-07-24 05:18:17.195983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:02.672 [2024-07-24 05:18:17.196004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.024 ms 00:29:02.672 [2024-07-24 05:18:17.196015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.672 [2024-07-24 05:18:17.196100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.672 [2024-07-24 05:18:17.196115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:02.672 [2024-07-24 05:18:17.196127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:02.672 [2024-07-24 05:18:17.196161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.672 [2024-07-24 05:18:17.230054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.672 [2024-07-24 05:18:17.230132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:02.672 [2024-07-24 05:18:17.230151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.798 ms 00:29:02.672 [2024-07-24 05:18:17.230161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.672 [2024-07-24 05:18:17.230259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.672 [2024-07-24 05:18:17.230274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:02.672 [2024-07-24 05:18:17.230286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:02.672 [2024-07-24 05:18:17.230295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.672 [2024-07-24 05:18:17.230469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.672 [2024-07-24 05:18:17.230486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:02.672 [2024-07-24 05:18:17.230513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:29:02.672 [2024-07-24 05:18:17.230524] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:02.672 [2024-07-24 05:18:17.230577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.672 [2024-07-24 05:18:17.230595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:02.672 [2024-07-24 05:18:17.230622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:02.672 [2024-07-24 05:18:17.230648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.672 [2024-07-24 05:18:17.246984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.672 [2024-07-24 05:18:17.247038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:02.672 [2024-07-24 05:18:17.247082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.305 ms 00:29:02.672 [2024-07-24 05:18:17.247093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.672 [2024-07-24 05:18:17.247250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.672 [2024-07-24 05:18:17.247268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:02.672 [2024-07-24 05:18:17.247296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:02.672 [2024-07-24 05:18:17.247306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.672 [2024-07-24 05:18:17.277650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.672 [2024-07-24 05:18:17.277717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:02.672 [2024-07-24 05:18:17.277754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.316 ms 00:29:02.672 [2024-07-24 05:18:17.277780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.672 [2024-07-24 05:18:17.289255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.672 [2024-07-24 05:18:17.289301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:02.673 [2024-07-24 05:18:17.289335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.572 ms 00:29:02.673 [2024-07-24 05:18:17.289345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.932 [2024-07-24 05:18:17.353708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.932 [2024-07-24 05:18:17.353776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:02.932 [2024-07-24 05:18:17.353826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 64.263 ms 00:29:02.932 [2024-07-24 05:18:17.353837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.932 [2024-07-24 05:18:17.354084] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:02.932 [2024-07-24 05:18:17.354284] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:02.932 [2024-07-24 05:18:17.354430] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:02.932 [2024-07-24 05:18:17.354556] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:02.932 [2024-07-24 05:18:17.354571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.932 [2024-07-24 05:18:17.354582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:02.932 [2024-07-24 
05:18:17.354601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.625 ms 00:29:02.932 [2024-07-24 05:18:17.354612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.932 [2024-07-24 05:18:17.354750] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:02.932 [2024-07-24 05:18:17.354770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.932 [2024-07-24 05:18:17.354782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:02.932 [2024-07-24 05:18:17.354794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:02.932 [2024-07-24 05:18:17.354804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.932 [2024-07-24 05:18:17.371733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.932 [2024-07-24 05:18:17.371801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:02.932 [2024-07-24 05:18:17.371817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.900 ms 00:29:02.932 [2024-07-24 05:18:17.371828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.932 [2024-07-24 05:18:17.381886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.932 [2024-07-24 05:18:17.381921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:02.932 [2024-07-24 05:18:17.381951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:02.932 [2024-07-24 05:18:17.381965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.932 [2024-07-24 05:18:17.382157] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:03.499 [2024-07-24 05:18:17.972191] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:03.499 [2024-07-24 05:18:17.972435] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:04.066 [2024-07-24 05:18:18.528348] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:04.066 [2024-07-24 05:18:18.528487] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:04.066 [2024-07-24 05:18:18.528511] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:04.066 [2024-07-24 05:18:18.528529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.066 [2024-07-24 05:18:18.528542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:04.066 [2024-07-24 05:18:18.528559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1146.492 ms 00:29:04.066 [2024-07-24 05:18:18.528572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.066 [2024-07-24 05:18:18.528648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.066 [2024-07-24 05:18:18.528663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:04.066 [2024-07-24 05:18:18.528687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:04.066 [2024-07-24 05:18:18.528698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:29:04.066 [2024-07-24 05:18:18.541362] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:04.066 [2024-07-24 05:18:18.541552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.066 [2024-07-24 05:18:18.541571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:04.066 [2024-07-24 05:18:18.541601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.821 ms 00:29:04.066 [2024-07-24 05:18:18.541612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.066 [2024-07-24 05:18:18.542455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.066 [2024-07-24 05:18:18.542488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:04.066 [2024-07-24 05:18:18.542520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.735 ms 00:29:04.066 [2024-07-24 05:18:18.542532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.066 [2024-07-24 05:18:18.544940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.066 [2024-07-24 05:18:18.544968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:04.066 [2024-07-24 05:18:18.544997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.348 ms 00:29:04.066 [2024-07-24 05:18:18.545007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.066 [2024-07-24 05:18:18.545053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.066 [2024-07-24 05:18:18.545068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:04.066 [2024-07-24 05:18:18.545079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:04.066 [2024-07-24 05:18:18.545088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.066 [2024-07-24 05:18:18.545201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.066 [2024-07-24 05:18:18.545220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:04.066 [2024-07-24 05:18:18.545231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:04.066 [2024-07-24 05:18:18.545241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.066 [2024-07-24 05:18:18.545266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.066 [2024-07-24 05:18:18.545278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:04.066 [2024-07-24 05:18:18.545289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:04.066 [2024-07-24 05:18:18.545299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.066 [2024-07-24 05:18:18.545336] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:04.066 [2024-07-24 05:18:18.545351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.066 [2024-07-24 05:18:18.545361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:04.066 [2024-07-24 05:18:18.545374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:04.066 [2024-07-24 05:18:18.545385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.066 [2024-07-24 05:18:18.545453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.066 
[2024-07-24 05:18:18.545468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:04.066 [2024-07-24 05:18:18.545478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:04.066 [2024-07-24 05:18:18.545488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.066 [2024-07-24 05:18:18.546811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1421.589 ms, result 0 00:29:04.066 [2024-07-24 05:18:18.561973] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.066 [2024-07-24 05:18:18.578032] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:04.066 [2024-07-24 05:18:18.586836] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:04.066 Validate MD5 checksum, iteration 1 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:04.066 05:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:04.325 [2024-07-24 05:18:18.723277] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
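The "FTL startup" management sequence above is the dirty-shutdown recovery path: band state and P2L checkpoints are restored, the two open NV-cache chunks (offsets 262144 and 524288, seq ids 14 and 15) are replayed, and the L2P comes back from shared memory, all in about 1.4 s, after which the NVMe/TCP listener reappears on 127.0.0.1:4420. The checksum pass then restarts from skip=0; the tcp_dd helper expands to the plain spdk_dd initiator invocation below (copied from the trace, with paths shown relative to the SPDK repo):

  build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json \
      --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0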
00:29:04.325 [2024-07-24 05:18:18.723741] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86369 ] 00:29:04.325 [2024-07-24 05:18:18.893314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.583 [2024-07-24 05:18:19.104600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.853  Copying: 471/1024 [MB] (471 MBps) Copying: 944/1024 [MB] (473 MBps) Copying: 1024/1024 [MB] (average 472 MBps) 00:29:09.853 00:29:09.853 05:18:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:09.853 05:18:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:11.756 05:18:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:11.756 Validate MD5 checksum, iteration 2 00:29:11.756 05:18:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=147d4511d2c9b626ebe5dac124c95e73 00:29:11.756 05:18:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 147d4511d2c9b626ebe5dac124c95e73 != \1\4\7\d\4\5\1\1\d\2\c\9\b\6\2\6\e\b\e\5\d\a\c\1\2\4\c\9\5\e\7\3 ]] 00:29:11.756 05:18:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:11.756 05:18:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:11.756 05:18:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:11.756 05:18:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:11.756 05:18:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:11.756 05:18:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:11.756 05:18:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:11.756 05:18:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:11.756 05:18:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:11.756 [2024-07-24 05:18:26.351105] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
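The first post-recovery digest, 147d4511d2c9b626ebe5dac124c95e73, is byte-for-byte the value recorded before the SIGKILL, i.e. no acknowledged data was lost across the dirty shutdown. Outside the harness the same cross-restart comparison can be expressed with md5sum's check mode; a sketch, assuming the pre-shutdown digest was saved to file.md5 (the cleanup step further below removes exactly such a file):

  md5sum test/ftl/file > test/ftl/file.md5   # before the dirty shutdown
  # ...kill, restart, re-read the same slice into test/ftl/file...
  md5sum -c test/ftl/file.md5                # must report: test/ftl/file: OK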
00:29:11.756 [2024-07-24 05:18:26.351298] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86448 ] 00:29:12.014 [2024-07-24 05:18:26.526332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.273 [2024-07-24 05:18:26.741163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.396  Copying: 481/1024 [MB] (481 MBps) Copying: 961/1024 [MB] (480 MBps) Copying: 1024/1024 [MB] (average 471 MBps) 00:29:16.396 00:29:16.396 05:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:16.396 05:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=38098b371c6ae0248a6ed155250c8645 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 38098b371c6ae0248a6ed155250c8645 != \3\8\0\9\8\b\3\7\1\c\6\a\e\0\2\4\8\a\6\e\d\1\5\5\2\5\0\c\8\6\4\5 ]] 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86335 ]] 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86335 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86335 ']' 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86335 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86335 00:29:18.311 killing process with pid 86335 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86335' 00:29:18.311 05:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86335 00:29:18.311 05:18:32 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86335 00:29:19.251 [2024-07-24 05:18:33.787027] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:19.251 [2024-07-24 05:18:33.803243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.251 [2024-07-24 05:18:33.803284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:19.251 [2024-07-24 05:18:33.803318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:19.251 [2024-07-24 05:18:33.803328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.251 [2024-07-24 05:18:33.803354] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:19.251 [2024-07-24 05:18:33.806361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.251 [2024-07-24 05:18:33.806396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:19.251 [2024-07-24 05:18:33.806424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.973 ms 00:29:19.251 [2024-07-24 05:18:33.806434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.251 [2024-07-24 05:18:33.806627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.251 [2024-07-24 05:18:33.806643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:19.251 [2024-07-24 05:18:33.806664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:29:19.251 [2024-07-24 05:18:33.806674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.251 [2024-07-24 05:18:33.807985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.251 [2024-07-24 05:18:33.808036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:19.251 [2024-07-24 05:18:33.808051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.292 ms 00:29:19.251 [2024-07-24 05:18:33.808084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.251 [2024-07-24 05:18:33.809414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.251 [2024-07-24 05:18:33.809438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:19.251 [2024-07-24 05:18:33.809450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.258 ms 00:29:19.251 [2024-07-24 05:18:33.809460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.251 [2024-07-24 05:18:33.822002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.251 [2024-07-24 05:18:33.822047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:19.251 [2024-07-24 05:18:33.822072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.464 ms 00:29:19.251 [2024-07-24 05:18:33.822084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.251 [2024-07-24 05:18:33.828785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.251 [2024-07-24 05:18:33.828826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:19.251 [2024-07-24 05:18:33.828870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.613 ms 00:29:19.251 [2024-07-24 05:18:33.828884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.251 [2024-07-24 05:18:33.828994] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.251 [2024-07-24 05:18:33.829019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:19.251 [2024-07-24 05:18:33.829033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:19.251 [2024-07-24 05:18:33.829047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.251 [2024-07-24 05:18:33.839904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.251 [2024-07-24 05:18:33.839943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:29:19.251 [2024-07-24 05:18:33.839975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.836 ms 00:29:19.251 [2024-07-24 05:18:33.839984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.251 [2024-07-24 05:18:33.851059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.251 [2024-07-24 05:18:33.851094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:29:19.251 [2024-07-24 05:18:33.851125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.039 ms 00:29:19.251 [2024-07-24 05:18:33.851134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.251 [2024-07-24 05:18:33.862108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.251 [2024-07-24 05:18:33.862141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:19.251 [2024-07-24 05:18:33.862172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.938 ms 00:29:19.251 [2024-07-24 05:18:33.862181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.251 [2024-07-24 05:18:33.873435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.251 [2024-07-24 05:18:33.873468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:19.251 [2024-07-24 05:18:33.873499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.176 ms 00:29:19.251 [2024-07-24 05:18:33.873508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.251 [2024-07-24 05:18:33.873544] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:19.251 [2024-07-24 05:18:33.873564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:19.251 [2024-07-24 05:18:33.873577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:19.251 [2024-07-24 05:18:33.873588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:19.251 [2024-07-24 05:18:33.873599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:19.251 [2024-07-24 05:18:33.873819] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:19.251 [2024-07-24 05:18:33.873828] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 915daee4-481d-4ec6-a3d6-43c6817ef5a6 00:29:19.251 [2024-07-24 05:18:33.873838] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:19.251 [2024-07-24 05:18:33.873848] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:19.251 [2024-07-24 05:18:33.873857] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:19.251 [2024-07-24 05:18:33.873867] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:19.251 [2024-07-24 05:18:33.873876] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:19.251 [2024-07-24 05:18:33.873899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:19.251 [2024-07-24 05:18:33.873914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:19.251 [2024-07-24 05:18:33.873923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:19.251 [2024-07-24 05:18:33.873932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:19.251 [2024-07-24 05:18:33.873941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.251 [2024-07-24 05:18:33.873951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:19.251 [2024-07-24 05:18:33.873961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.399 ms 00:29:19.251 [2024-07-24 05:18:33.873971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 05:18:33.889451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.510 [2024-07-24 05:18:33.889482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:19.510 [2024-07-24 05:18:33.889512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.427 ms 00:29:19.510 [2024-07-24 05:18:33.889528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 05:18:33.889913] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:19.510 [2024-07-24 05:18:33.889928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:19.510 [2024-07-24 05:18:33.889940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.360 ms 00:29:19.510 [2024-07-24 05:18:33.889949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 05:18:33.931245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:19.510 [2024-07-24 05:18:33.931285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:19.510 [2024-07-24 05:18:33.931315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:19.510 [2024-07-24 05:18:33.931331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 05:18:33.931365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:19.510 [2024-07-24 05:18:33.931393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:19.510 [2024-07-24 05:18:33.931404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:19.510 [2024-07-24 05:18:33.931413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 05:18:33.931536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:19.510 [2024-07-24 05:18:33.931554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:19.510 [2024-07-24 05:18:33.931566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:19.510 [2024-07-24 05:18:33.931576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 05:18:33.931605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:19.510 [2024-07-24 05:18:33.931618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:19.510 [2024-07-24 05:18:33.931628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:19.510 [2024-07-24 05:18:33.931638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 05:18:34.008707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:19.510 [2024-07-24 05:18:34.008764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:19.510 [2024-07-24 05:18:34.008796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:19.510 [2024-07-24 05:18:34.008813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 05:18:34.077388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:19.510 [2024-07-24 05:18:34.077438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:19.510 [2024-07-24 05:18:34.077470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:19.510 [2024-07-24 05:18:34.077480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 05:18:34.077591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:19.510 [2024-07-24 05:18:34.077607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:19.510 [2024-07-24 05:18:34.077618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:19.510 [2024-07-24 05:18:34.077628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 
05:18:34.077677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:19.510 [2024-07-24 05:18:34.077698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:19.510 [2024-07-24 05:18:34.077709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:19.510 [2024-07-24 05:18:34.077718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 05:18:34.077816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:19.510 [2024-07-24 05:18:34.077833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:19.510 [2024-07-24 05:18:34.077844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:19.510 [2024-07-24 05:18:34.077853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 05:18:34.077955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:19.510 [2024-07-24 05:18:34.077977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:19.510 [2024-07-24 05:18:34.078002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:19.510 [2024-07-24 05:18:34.078013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 05:18:34.078054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:19.510 [2024-07-24 05:18:34.078068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:19.510 [2024-07-24 05:18:34.078078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:19.510 [2024-07-24 05:18:34.078088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 05:18:34.078136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:19.510 [2024-07-24 05:18:34.078156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:19.510 [2024-07-24 05:18:34.078167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:19.510 [2024-07-24 05:18:34.078177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.510 [2024-07-24 05:18:34.078366] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 275.026 ms, result 0 00:29:20.447 05:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:20.447 05:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:20.447 05:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:20.447 05:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:20.447 05:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:20.447 05:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:20.447 Remove shared memory files 00:29:20.447 05:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:20.447 05:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:20.447 05:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:20.447 05:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:20.706 05:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid86132 
00:29:20.706 05:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:20.706 05:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:20.706 ************************************ 00:29:20.706 END TEST ftl_upgrade_shutdown 00:29:20.706 ************************************ 00:29:20.706 00:29:20.706 real 1m30.739s 00:29:20.706 user 2m9.554s 00:29:20.706 sys 0m22.053s 00:29:20.706 05:18:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:20.706 05:18:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:20.706 05:18:35 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:20.706 05:18:35 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:20.706 05:18:35 ftl -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:29:20.706 05:18:35 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:20.706 05:18:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:20.706 ************************************ 00:29:20.706 START TEST ftl_restore_fast 00:29:20.706 ************************************ 00:29:20.706 05:18:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:20.706 * Looking for test storage... 00:29:20.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
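restore.sh drives everything that follows: -f selects the FTL fast-shutdown path and -c supplies the NV-cache PCIe address, with the remaining positional argument (0000:00:11.0) naming the base device. Reconstructed from the getopts xtrace below, the option handling amounts to roughly this sketch; the -u branch (restoring an existing FTL instance by UUID, unused in this run) is inferred from the :u:c:f optstring and the '[' -n '' ']' check traced later, so treat it as an approximation of the real script rather than a verbatim copy:

  # approximate reconstruction of restore.sh's option parsing (restore.sh@15-25)
  while getopts :u:c:f opt; do
      case $opt in
          u) uuid=$OPTARG ;;      # assumed: UUID of an FTL instance to restore
          c) nv_cache=$OPTARG ;;  # 0000:00:10.0 in this run
          f) fast_shutdown=1 ;;   # later appends --fast-shutdown to bdev_ftl_create
      esac
  done
  shift $((OPTIND - 1))           # the xtrace shows this evaluating to 'shift 3'
  device=$1                       # 0000:00:11.0 in this run
  timeout=240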
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid=
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:29:20.706 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.hGue1JPMb3
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=86608
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 86608
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- common/autotest_common.sh@829 -- # '[' -z 86608 ']'
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:20.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:20.707 05:18:35 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:29:20.966 [2024-07-24 05:18:35.347514] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
00:29:20.966 [2024-07-24 05:18:35.347667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86608 ]
00:29:21.224 [2024-07-24 05:18:35.502822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:21.224 [2024-07-24 05:18:35.663539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:21.792 05:18:36 ftl.ftl_restore_fast -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:21.792 05:18:36 ftl.ftl_restore_fast -- common/autotest_common.sh@862 -- # return 0
00:29:21.792 05:18:36 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:29:21.792 05:18:36 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0
00:29:21.792 05:18:36 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:29:21.792 05:18:36 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424
00:29:21.792 05:18:36 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev
00:29:21.792 05:18:36 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:29:22.050 05:18:36 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:29:22.050 05:18:36 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size
00:29:22.050 05:18:36 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:29:22.050 05:18:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bdev_name=nvme0n1
00:29:22.050 05:18:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local bdev_info
00:29:22.050 05:18:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bs
00:29:22.050 05:18:36 ftl.ftl_restore_fast --
common/autotest_common.sh@1379 -- # local nb
00:29:22.050 05:18:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:29:22.309 05:18:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # bdev_info='[
00:29:22.309 {
00:29:22.309 "name": "nvme0n1",
00:29:22.309 "aliases": [
00:29:22.309 "b8a7324b-a692-41fc-bc19-7d9e562e89f6"
00:29:22.309 ],
00:29:22.309 "product_name": "NVMe disk",
00:29:22.309 "block_size": 4096,
00:29:22.309 "num_blocks": 1310720,
00:29:22.309 "uuid": "b8a7324b-a692-41fc-bc19-7d9e562e89f6",
00:29:22.309 "assigned_rate_limits": {
00:29:22.309 "rw_ios_per_sec": 0,
00:29:22.309 "rw_mbytes_per_sec": 0,
00:29:22.309 "r_mbytes_per_sec": 0,
00:29:22.309 "w_mbytes_per_sec": 0
00:29:22.309 },
00:29:22.309 "claimed": true,
00:29:22.309 "claim_type": "read_many_write_one",
00:29:22.309 "zoned": false,
00:29:22.309 "supported_io_types": {
00:29:22.309 "read": true,
00:29:22.309 "write": true,
00:29:22.309 "unmap": true,
00:29:22.309 "flush": true,
00:29:22.309 "reset": true,
00:29:22.309 "nvme_admin": true,
00:29:22.309 "nvme_io": true,
00:29:22.309 "nvme_io_md": false,
00:29:22.309 "write_zeroes": true,
00:29:22.309 "zcopy": false,
00:29:22.309 "get_zone_info": false,
00:29:22.309 "zone_management": false,
00:29:22.309 "zone_append": false,
00:29:22.309 "compare": true,
00:29:22.309 "compare_and_write": false,
00:29:22.309 "abort": true,
00:29:22.309 "seek_hole": false,
00:29:22.309 "seek_data": false,
00:29:22.309 "copy": true,
00:29:22.309 "nvme_iov_md": false
00:29:22.309 },
00:29:22.309 "driver_specific": {
00:29:22.309 "nvme": [
00:29:22.309 {
00:29:22.309 "pci_address": "0000:00:11.0",
00:29:22.309 "trid": {
00:29:22.309 "trtype": "PCIe",
00:29:22.309 "traddr": "0000:00:11.0"
00:29:22.309 },
00:29:22.309 "ctrlr_data": {
00:29:22.309 "cntlid": 0,
00:29:22.309 "vendor_id": "0x1b36",
00:29:22.309 "model_number": "QEMU NVMe Ctrl",
00:29:22.309 "serial_number": "12341",
00:29:22.309 "firmware_revision": "8.0.0",
00:29:22.309 "subnqn": "nqn.2019-08.org.qemu:12341",
00:29:22.309 "oacs": {
00:29:22.309 "security": 0,
00:29:22.309 "format": 1,
00:29:22.309 "firmware": 0,
00:29:22.309 "ns_manage": 1
00:29:22.309 },
00:29:22.309 "multi_ctrlr": false,
00:29:22.309 "ana_reporting": false
00:29:22.309 },
00:29:22.309 "vs": {
00:29:22.309 "nvme_version": "1.4"
00:29:22.309 },
00:29:22.309 "ns_data": {
00:29:22.309 "id": 1,
00:29:22.309 "can_share": false
00:29:22.309 }
00:29:22.309 }
00:29:22.309 ],
00:29:22.309 "mp_policy": "active_passive"
00:29:22.309 }
00:29:22.309 }
00:29:22.309 ]'
00:29:22.309 05:18:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # jq '.[] .block_size'
00:29:22.309 05:18:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # bs=4096
00:29:22.309 05:18:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks'
00:29:22.568 05:18:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # nb=1310720
00:29:22.568 05:18:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bdev_size=5120
00:29:22.568 05:18:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # echo 5120
00:29:22.568 05:18:36 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120
00:29:22.568 05:18:36 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:29:22.568 05:18:36 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols
00:29:22.568 05:18:36 ftl.ftl_restore_fast -- ftl/common.sh@28 -- #
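The bdev_get_bdevs/jq sequence just traced is autotest_common.sh's get_bdev_size helper: it fetches the bdev's JSON description and converts block_size * num_blocks into MiB, which is how base_size=5120 was derived for nvme0n1. A minimal sketch of that computation, assuming the helper's shape matches the trace (the real function body may differ):

  # sketch of get_bdev_size as traced above (autotest_common.sh@1376-1386)
  get_bdev_size() {
      local bdev_name=$1 bdev_info bs nb
      bdev_info=$($rpc_py bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for nvme0n1
      nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 for nvme0n1
      echo $((bs * nb / 1024 / 1024))               # 4096 * 1310720 / 2^20 = 5120 MiB
  }

The [[ 103424 -le 5120 ]] comparison above is ftl/common.sh's size guard applied to that result, and the clear_lvols pass the trace is entering below deletes any lvolstores left over from earlier tests before a fresh one is created on nvme0n1.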
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:22.568 05:18:36 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:22.568 05:18:37 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=5163fdba-35a2-4bb0-9436-603272b3e9ec 00:29:22.568 05:18:37 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:22.568 05:18:37 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5163fdba-35a2-4bb0-9436-603272b3e9ec 00:29:22.827 05:18:37 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:23.086 05:18:37 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=434e22a3-534d-4807-99c5-f5e1b4a84693 00:29:23.086 05:18:37 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 434e22a3-534d-4807-99c5-f5e1b4a84693 00:29:23.345 05:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=f69ef152-c94d-49df-81d1-6c1eab35f1f0 00:29:23.345 05:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:23.345 05:18:37 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f69ef152-c94d-49df-81d1-6c1eab35f1f0 00:29:23.345 05:18:37 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:23.345 05:18:37 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:23.345 05:18:37 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=f69ef152-c94d-49df-81d1-6c1eab35f1f0 00:29:23.345 05:18:37 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:23.345 05:18:37 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size f69ef152-c94d-49df-81d1-6c1eab35f1f0 00:29:23.345 05:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bdev_name=f69ef152-c94d-49df-81d1-6c1eab35f1f0 00:29:23.345 05:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local bdev_info 00:29:23.345 05:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bs 00:29:23.345 05:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local nb 00:29:23.345 05:18:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f69ef152-c94d-49df-81d1-6c1eab35f1f0 00:29:23.605 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:29:23.605 { 00:29:23.605 "name": "f69ef152-c94d-49df-81d1-6c1eab35f1f0", 00:29:23.605 "aliases": [ 00:29:23.605 "lvs/nvme0n1p0" 00:29:23.605 ], 00:29:23.605 "product_name": "Logical Volume", 00:29:23.605 "block_size": 4096, 00:29:23.605 "num_blocks": 26476544, 00:29:23.605 "uuid": "f69ef152-c94d-49df-81d1-6c1eab35f1f0", 00:29:23.605 "assigned_rate_limits": { 00:29:23.605 "rw_ios_per_sec": 0, 00:29:23.605 "rw_mbytes_per_sec": 0, 00:29:23.605 "r_mbytes_per_sec": 0, 00:29:23.605 "w_mbytes_per_sec": 0 00:29:23.605 }, 00:29:23.605 "claimed": false, 00:29:23.605 "zoned": false, 00:29:23.605 "supported_io_types": { 00:29:23.605 "read": true, 00:29:23.605 "write": true, 00:29:23.605 "unmap": true, 00:29:23.605 "flush": false, 00:29:23.605 "reset": true, 00:29:23.605 "nvme_admin": false, 00:29:23.605 "nvme_io": false, 00:29:23.605 "nvme_io_md": false, 00:29:23.605 "write_zeroes": true, 00:29:23.605 "zcopy": false, 00:29:23.605 "get_zone_info": false, 00:29:23.605 "zone_management": false, 00:29:23.605 
"zone_append": false, 00:29:23.605 "compare": false, 00:29:23.605 "compare_and_write": false, 00:29:23.605 "abort": false, 00:29:23.605 "seek_hole": true, 00:29:23.605 "seek_data": true, 00:29:23.605 "copy": false, 00:29:23.605 "nvme_iov_md": false 00:29:23.605 }, 00:29:23.605 "driver_specific": { 00:29:23.605 "lvol": { 00:29:23.605 "lvol_store_uuid": "434e22a3-534d-4807-99c5-f5e1b4a84693", 00:29:23.605 "base_bdev": "nvme0n1", 00:29:23.605 "thin_provision": true, 00:29:23.605 "num_allocated_clusters": 0, 00:29:23.605 "snapshot": false, 00:29:23.605 "clone": false, 00:29:23.605 "esnap_clone": false 00:29:23.605 } 00:29:23.605 } 00:29:23.605 } 00:29:23.605 ]' 00:29:23.605 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:29:23.605 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # bs=4096 00:29:23.605 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:29:23.605 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # nb=26476544 00:29:23.605 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:29:23.605 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # echo 103424 00:29:23.605 05:18:38 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:23.605 05:18:38 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:23.605 05:18:38 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:24.172 05:18:38 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:24.172 05:18:38 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:24.172 05:18:38 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size f69ef152-c94d-49df-81d1-6c1eab35f1f0 00:29:24.172 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bdev_name=f69ef152-c94d-49df-81d1-6c1eab35f1f0 00:29:24.172 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local bdev_info 00:29:24.172 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bs 00:29:24.172 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local nb 00:29:24.172 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f69ef152-c94d-49df-81d1-6c1eab35f1f0 00:29:24.172 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:29:24.172 { 00:29:24.172 "name": "f69ef152-c94d-49df-81d1-6c1eab35f1f0", 00:29:24.172 "aliases": [ 00:29:24.172 "lvs/nvme0n1p0" 00:29:24.172 ], 00:29:24.172 "product_name": "Logical Volume", 00:29:24.172 "block_size": 4096, 00:29:24.172 "num_blocks": 26476544, 00:29:24.172 "uuid": "f69ef152-c94d-49df-81d1-6c1eab35f1f0", 00:29:24.172 "assigned_rate_limits": { 00:29:24.172 "rw_ios_per_sec": 0, 00:29:24.172 "rw_mbytes_per_sec": 0, 00:29:24.172 "r_mbytes_per_sec": 0, 00:29:24.172 "w_mbytes_per_sec": 0 00:29:24.172 }, 00:29:24.172 "claimed": false, 00:29:24.172 "zoned": false, 00:29:24.172 "supported_io_types": { 00:29:24.172 "read": true, 00:29:24.172 "write": true, 00:29:24.172 "unmap": true, 00:29:24.172 "flush": false, 00:29:24.172 "reset": true, 00:29:24.172 "nvme_admin": false, 00:29:24.172 "nvme_io": false, 00:29:24.172 "nvme_io_md": false, 00:29:24.172 "write_zeroes": true, 00:29:24.172 "zcopy": false, 00:29:24.172 "get_zone_info": false, 00:29:24.172 
"zone_management": false, 00:29:24.172 "zone_append": false, 00:29:24.172 "compare": false, 00:29:24.172 "compare_and_write": false, 00:29:24.172 "abort": false, 00:29:24.172 "seek_hole": true, 00:29:24.172 "seek_data": true, 00:29:24.172 "copy": false, 00:29:24.172 "nvme_iov_md": false 00:29:24.172 }, 00:29:24.172 "driver_specific": { 00:29:24.172 "lvol": { 00:29:24.172 "lvol_store_uuid": "434e22a3-534d-4807-99c5-f5e1b4a84693", 00:29:24.172 "base_bdev": "nvme0n1", 00:29:24.172 "thin_provision": true, 00:29:24.172 "num_allocated_clusters": 0, 00:29:24.172 "snapshot": false, 00:29:24.172 "clone": false, 00:29:24.172 "esnap_clone": false 00:29:24.172 } 00:29:24.172 } 00:29:24.172 } 00:29:24.172 ]' 00:29:24.172 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:29:24.430 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # bs=4096 00:29:24.430 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:29:24.430 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # nb=26476544 00:29:24.430 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:29:24.430 05:18:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # echo 103424 00:29:24.430 05:18:38 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:24.430 05:18:38 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:24.688 05:18:39 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:24.688 05:18:39 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size f69ef152-c94d-49df-81d1-6c1eab35f1f0 00:29:24.688 05:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bdev_name=f69ef152-c94d-49df-81d1-6c1eab35f1f0 00:29:24.688 05:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local bdev_info 00:29:24.688 05:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bs 00:29:24.688 05:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local nb 00:29:24.688 05:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f69ef152-c94d-49df-81d1-6c1eab35f1f0 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:29:24.947 { 00:29:24.947 "name": "f69ef152-c94d-49df-81d1-6c1eab35f1f0", 00:29:24.947 "aliases": [ 00:29:24.947 "lvs/nvme0n1p0" 00:29:24.947 ], 00:29:24.947 "product_name": "Logical Volume", 00:29:24.947 "block_size": 4096, 00:29:24.947 "num_blocks": 26476544, 00:29:24.947 "uuid": "f69ef152-c94d-49df-81d1-6c1eab35f1f0", 00:29:24.947 "assigned_rate_limits": { 00:29:24.947 "rw_ios_per_sec": 0, 00:29:24.947 "rw_mbytes_per_sec": 0, 00:29:24.947 "r_mbytes_per_sec": 0, 00:29:24.947 "w_mbytes_per_sec": 0 00:29:24.947 }, 00:29:24.947 "claimed": false, 00:29:24.947 "zoned": false, 00:29:24.947 "supported_io_types": { 00:29:24.947 "read": true, 00:29:24.947 "write": true, 00:29:24.947 "unmap": true, 00:29:24.947 "flush": false, 00:29:24.947 "reset": true, 00:29:24.947 "nvme_admin": false, 00:29:24.947 "nvme_io": false, 00:29:24.947 "nvme_io_md": false, 00:29:24.947 "write_zeroes": true, 00:29:24.947 "zcopy": false, 00:29:24.947 "get_zone_info": false, 00:29:24.947 "zone_management": false, 00:29:24.947 "zone_append": false, 00:29:24.947 "compare": false, 00:29:24.947 "compare_and_write": false, 00:29:24.947 "abort": false, 
00:29:24.947 "seek_hole": true, 00:29:24.947 "seek_data": true, 00:29:24.947 "copy": false, 00:29:24.947 "nvme_iov_md": false 00:29:24.947 }, 00:29:24.947 "driver_specific": { 00:29:24.947 "lvol": { 00:29:24.947 "lvol_store_uuid": "434e22a3-534d-4807-99c5-f5e1b4a84693", 00:29:24.947 "base_bdev": "nvme0n1", 00:29:24.947 "thin_provision": true, 00:29:24.947 "num_allocated_clusters": 0, 00:29:24.947 "snapshot": false, 00:29:24.947 "clone": false, 00:29:24.947 "esnap_clone": false 00:29:24.947 } 00:29:24.947 } 00:29:24.947 } 00:29:24.947 ]' 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # bs=4096 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # nb=26476544 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bdev_size=103424 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # echo 103424 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f69ef152-c94d-49df-81d1-6c1eab35f1f0 --l2p_dram_limit 10' 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:24.947 05:18:39 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f69ef152-c94d-49df-81d1-6c1eab35f1f0 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:25.208 [2024-07-24 05:18:39.744833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.208 [2024-07-24 05:18:39.744962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:25.208 [2024-07-24 05:18:39.744984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:25.208 [2024-07-24 05:18:39.745009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.208 [2024-07-24 05:18:39.745104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.208 [2024-07-24 05:18:39.745125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:25.208 [2024-07-24 05:18:39.745138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:25.208 [2024-07-24 05:18:39.745151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.208 [2024-07-24 05:18:39.745179] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:25.208 [2024-07-24 05:18:39.746245] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:25.208 [2024-07-24 05:18:39.746275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.208 [2024-07-24 05:18:39.746293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:25.208 [2024-07-24 05:18:39.746305] 
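The FTL startup trace now underway was kicked off by the rpc.py call assembled piece by piece at restore.sh@49-58 above. Written out as a single command with this run's values, it is:

  # the bdev_ftl_create call issued above, spelled out in full;
  # -d is the thin-provisioned lvol carved from the base device, -c the split
  # of the NV-cache device, --l2p_dram_limit 10 caps the resident L2P at
  # 10 MiB, and --fast-shutdown comes from the -f flag parsed at test start
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
      -b ftl0 -d f69ef152-c94d-49df-81d1-6c1eab35f1f0 \
      --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown

The trace that follows walks the management steps of 'FTL startup' and ends by reporting the new instance's UUID (cbfa6140-daca-4402-8cfb-8aeef1de4c65).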
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:29:25.208 [2024-07-24 05:18:39.746318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.208 [2024-07-24 05:18:39.746484] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cbfa6140-daca-4402-8cfb-8aeef1de4c65 00:29:25.208 [2024-07-24 05:18:39.747629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.208 [2024-07-24 05:18:39.747672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:25.208 [2024-07-24 05:18:39.747692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:25.208 [2024-07-24 05:18:39.747706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.208 [2024-07-24 05:18:39.752638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.208 [2024-07-24 05:18:39.752676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:25.208 [2024-07-24 05:18:39.752710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.827 ms 00:29:25.208 [2024-07-24 05:18:39.752736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.208 [2024-07-24 05:18:39.752853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.208 [2024-07-24 05:18:39.752872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:25.208 [2024-07-24 05:18:39.752898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:29:25.208 [2024-07-24 05:18:39.752911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.208 [2024-07-24 05:18:39.752976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.208 [2024-07-24 05:18:39.752992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:25.208 [2024-07-24 05:18:39.753009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:25.208 [2024-07-24 05:18:39.753019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.208 [2024-07-24 05:18:39.753050] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:25.208 [2024-07-24 05:18:39.757149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.208 [2024-07-24 05:18:39.757203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:25.208 [2024-07-24 05:18:39.757218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.110 ms 00:29:25.208 [2024-07-24 05:18:39.757230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.208 [2024-07-24 05:18:39.757286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.208 [2024-07-24 05:18:39.757305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:25.208 [2024-07-24 05:18:39.757317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:25.208 [2024-07-24 05:18:39.757330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.208 [2024-07-24 05:18:39.757374] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:25.208 [2024-07-24 05:18:39.757514] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:25.208 [2024-07-24 05:18:39.757532] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:25.208 [2024-07-24 05:18:39.757550] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:25.209 [2024-07-24 05:18:39.757564] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:25.209 [2024-07-24 05:18:39.757579] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:25.209 [2024-07-24 05:18:39.757590] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:25.209 [2024-07-24 05:18:39.757607] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:25.209 [2024-07-24 05:18:39.757617] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:25.209 [2024-07-24 05:18:39.757629] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:25.209 [2024-07-24 05:18:39.757639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.209 [2024-07-24 05:18:39.757652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:25.209 [2024-07-24 05:18:39.757664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:29:25.209 [2024-07-24 05:18:39.757676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.209 [2024-07-24 05:18:39.757754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.209 [2024-07-24 05:18:39.757770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:25.209 [2024-07-24 05:18:39.757782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:25.209 [2024-07-24 05:18:39.757797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.209 [2024-07-24 05:18:39.757923] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:25.209 [2024-07-24 05:18:39.757946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:25.209 [2024-07-24 05:18:39.757968] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:25.209 [2024-07-24 05:18:39.757997] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.209 [2024-07-24 05:18:39.758009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:25.209 [2024-07-24 05:18:39.758021] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:25.209 [2024-07-24 05:18:39.758032] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:25.209 [2024-07-24 05:18:39.758044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:25.209 [2024-07-24 05:18:39.758055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:25.209 [2024-07-24 05:18:39.758067] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:25.209 [2024-07-24 05:18:39.758078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:25.209 [2024-07-24 05:18:39.758092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:25.209 [2024-07-24 05:18:39.758103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:25.209 [2024-07-24 05:18:39.758115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:25.209 [2024-07-24 05:18:39.758125] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:25.209 [2024-07-24 05:18:39.758137] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.209 [2024-07-24 05:18:39.758148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:25.209 [2024-07-24 05:18:39.758162] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:25.209 [2024-07-24 05:18:39.758175] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.209 [2024-07-24 05:18:39.758187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:25.209 [2024-07-24 05:18:39.758197] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:25.209 [2024-07-24 05:18:39.758210] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:25.209 [2024-07-24 05:18:39.758220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:25.209 [2024-07-24 05:18:39.758232] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:25.209 [2024-07-24 05:18:39.758242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:25.209 [2024-07-24 05:18:39.758270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:25.209 [2024-07-24 05:18:39.758282] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:25.209 [2024-07-24 05:18:39.758309] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:25.209 [2024-07-24 05:18:39.758319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:25.209 [2024-07-24 05:18:39.758347] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:25.209 [2024-07-24 05:18:39.758358] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:25.209 [2024-07-24 05:18:39.758372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:25.209 [2024-07-24 05:18:39.758382] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:25.209 [2024-07-24 05:18:39.758397] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:25.209 [2024-07-24 05:18:39.758408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:25.209 [2024-07-24 05:18:39.758421] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:25.209 [2024-07-24 05:18:39.758432] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:25.209 [2024-07-24 05:18:39.758447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:25.209 [2024-07-24 05:18:39.758458] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:25.209 [2024-07-24 05:18:39.758471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.209 [2024-07-24 05:18:39.758482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:25.209 [2024-07-24 05:18:39.758495] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:25.209 [2024-07-24 05:18:39.758506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.209 [2024-07-24 05:18:39.758518] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:25.209 [2024-07-24 05:18:39.758530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:25.209 [2024-07-24 05:18:39.758543] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:29:25.209 [2024-07-24 05:18:39.758555] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.209 [2024-07-24 05:18:39.758568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:25.209 [2024-07-24 05:18:39.758579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:25.209 [2024-07-24 05:18:39.758594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:25.209 [2024-07-24 05:18:39.758606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:25.209 [2024-07-24 05:18:39.758619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:25.209 [2024-07-24 05:18:39.758630] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:25.209 [2024-07-24 05:18:39.758647] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:25.209 [2024-07-24 05:18:39.758664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:25.209 [2024-07-24 05:18:39.758678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:25.209 [2024-07-24 05:18:39.758691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:25.209 [2024-07-24 05:18:39.758704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:25.209 [2024-07-24 05:18:39.758716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:25.209 [2024-07-24 05:18:39.758730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:25.209 [2024-07-24 05:18:39.758742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:25.209 [2024-07-24 05:18:39.758757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:25.209 [2024-07-24 05:18:39.758769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:25.209 [2024-07-24 05:18:39.758782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:25.209 [2024-07-24 05:18:39.758794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:25.209 [2024-07-24 05:18:39.758810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:25.209 [2024-07-24 05:18:39.758821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:25.209 [2024-07-24 05:18:39.758835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:25.209 [2024-07-24 05:18:39.758847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
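The SB metadata rows above give each region's offset and size in 4 KiB FTL blocks, in hex, and they are consistent with both the MiB figures in the layout dump and the L2P parameters reported earlier (20971520 entries, 4-byte addresses). A quick arithmetic cross-check, assuming the 4096-byte block size used throughout this run; the row 'type:0x2 ... blk_offs:0x20 blk_sz:0x5000' decodes to the same 80 MiB at the same 0.12 MiB offset as the l2p region, so it is evidently that region's entry:

  # l2p region size: entry count * address size
  echo $(( 20971520 * 4 / 1024 / 1024 ))           # -> 80, matches 'Region l2p ... blocks: 80.00 MiB'
  # the same region from the SB metadata row, blocks to MiB
  echo $(( 0x5000 * 4096 / 1024 / 1024 ))          # -> 80 MiB again
  echo $(( 0x20 * 4096 ))                          # -> 131072 B, the 0.12 MiB l2p offset
  # each entry maps one 4 KiB block, suggesting the mappable user space
  echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 )) # -> 80 GiB

The base-device half of the dump, which follows, uses the same encoding.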
00:29:25.209 [2024-07-24 05:18:39.758860] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:25.209 [2024-07-24 05:18:39.758873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:25.209 [2024-07-24 05:18:39.758887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:25.209 [2024-07-24 05:18:39.758899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:25.209 [2024-07-24 05:18:39.758924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:25.209 [2024-07-24 05:18:39.758938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:25.209 [2024-07-24 05:18:39.758954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.209 [2024-07-24 05:18:39.758966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:25.209 [2024-07-24 05:18:39.758980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.117 ms 00:29:25.209 [2024-07-24 05:18:39.758991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.209 [2024-07-24 05:18:39.759044] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:25.209 [2024-07-24 05:18:39.759060] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:27.743 [2024-07-24 05:18:41.778849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.743 [2024-07-24 05:18:41.778944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:27.743 [2024-07-24 05:18:41.778999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2019.806 ms 00:29:27.743 [2024-07-24 05:18:41.779012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.743 [2024-07-24 05:18:41.808559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.743 [2024-07-24 05:18:41.808614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:27.743 [2024-07-24 05:18:41.808651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.268 ms 00:29:27.743 [2024-07-24 05:18:41.808663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.743 [2024-07-24 05:18:41.808823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.743 [2024-07-24 05:18:41.808841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:27.743 [2024-07-24 05:18:41.808911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:27.743 [2024-07-24 05:18:41.808925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.743 [2024-07-24 05:18:41.841713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.743 [2024-07-24 05:18:41.841778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:27.743 [2024-07-24 05:18:41.841813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.706 ms 00:29:27.743 [2024-07-24 05:18:41.841824] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.743 [2024-07-24 05:18:41.841922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.743 [2024-07-24 05:18:41.841939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:27.743 [2024-07-24 05:18:41.841958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:27.743 [2024-07-24 05:18:41.841970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.743 [2024-07-24 05:18:41.842375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.743 [2024-07-24 05:18:41.842393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:27.743 [2024-07-24 05:18:41.842408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:29:27.743 [2024-07-24 05:18:41.842419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.743 [2024-07-24 05:18:41.842568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.743 [2024-07-24 05:18:41.842587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:27.743 [2024-07-24 05:18:41.842617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:29:27.743 [2024-07-24 05:18:41.842629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.743 [2024-07-24 05:18:41.858424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.743 [2024-07-24 05:18:41.858470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:27.743 [2024-07-24 05:18:41.858492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.767 ms 00:29:27.743 [2024-07-24 05:18:41.858505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.743 [2024-07-24 05:18:41.870662] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:27.743 [2024-07-24 05:18:41.873377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.743 [2024-07-24 05:18:41.873425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:27.743 [2024-07-24 05:18:41.873441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.771 ms 00:29:27.744 [2024-07-24 05:18:41.873454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.744 [2024-07-24 05:18:41.960242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.744 [2024-07-24 05:18:41.960324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:27.744 [2024-07-24 05:18:41.960345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.748 ms 00:29:27.744 [2024-07-24 05:18:41.960374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.744 [2024-07-24 05:18:41.960596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.744 [2024-07-24 05:18:41.960619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:27.744 [2024-07-24 05:18:41.960632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:29:27.744 [2024-07-24 05:18:41.960648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.744 [2024-07-24 05:18:41.987582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.744 [2024-07-24 05:18:41.987644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:29:27.744 [2024-07-24 05:18:41.987662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.873 ms 00:29:27.744 [2024-07-24 05:18:41.987679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.744 [2024-07-24 05:18:42.014033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.744 [2024-07-24 05:18:42.014090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:27.744 [2024-07-24 05:18:42.014107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.307 ms 00:29:27.744 [2024-07-24 05:18:42.014120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.744 [2024-07-24 05:18:42.014789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.744 [2024-07-24 05:18:42.014825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:27.744 [2024-07-24 05:18:42.014879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:29:27.744 [2024-07-24 05:18:42.014897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.744 [2024-07-24 05:18:42.100262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.744 [2024-07-24 05:18:42.100342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:27.744 [2024-07-24 05:18:42.100377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.254 ms 00:29:27.744 [2024-07-24 05:18:42.100394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.744 [2024-07-24 05:18:42.128111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.744 [2024-07-24 05:18:42.128169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:27.744 [2024-07-24 05:18:42.128186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.668 ms 00:29:27.744 [2024-07-24 05:18:42.128198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.744 [2024-07-24 05:18:42.155207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.744 [2024-07-24 05:18:42.155266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:27.744 [2024-07-24 05:18:42.155298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.966 ms 00:29:27.744 [2024-07-24 05:18:42.155310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.744 [2024-07-24 05:18:42.182561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.744 [2024-07-24 05:18:42.182619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:27.744 [2024-07-24 05:18:42.182637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.210 ms 00:29:27.744 [2024-07-24 05:18:42.182649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.744 [2024-07-24 05:18:42.182698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.744 [2024-07-24 05:18:42.182719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:27.744 [2024-07-24 05:18:42.182732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:27.744 [2024-07-24 05:18:42.182747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.744 [2024-07-24 05:18:42.182897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.744 [2024-07-24 
05:18:42.182924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:27.744 [2024-07-24 05:18:42.182937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:29:27.744 [2024-07-24 05:18:42.182950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.744 [2024-07-24 05:18:42.184122] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2438.761 ms, result 0 00:29:27.744 { 00:29:27.744 "name": "ftl0", 00:29:27.744 "uuid": "cbfa6140-daca-4402-8cfb-8aeef1de4c65" 00:29:27.744 } 00:29:27.744 05:18:42 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:27.744 05:18:42 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:28.002 05:18:42 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:29:28.002 05:18:42 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:28.261 [2024-07-24 05:18:42.743558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.261 [2024-07-24 05:18:42.743822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:28.261 [2024-07-24 05:18:42.743993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:28.261 [2024-07-24 05:18:42.744121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.261 [2024-07-24 05:18:42.744186] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:28.261 [2024-07-24 05:18:42.747575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.261 [2024-07-24 05:18:42.747741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:28.261 [2024-07-24 05:18:42.747908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.346 ms 00:29:28.261 [2024-07-24 05:18:42.747967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.261 [2024-07-24 05:18:42.748387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.261 [2024-07-24 05:18:42.748553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:28.261 [2024-07-24 05:18:42.748706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:29:28.261 [2024-07-24 05:18:42.748764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.261 [2024-07-24 05:18:42.752076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.261 [2024-07-24 05:18:42.752246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:28.261 [2024-07-24 05:18:42.752372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.163 ms 00:29:28.261 [2024-07-24 05:18:42.752533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.261 [2024-07-24 05:18:42.758850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.261 [2024-07-24 05:18:42.759040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:28.261 [2024-07-24 05:18:42.759164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.246 ms 00:29:28.261 [2024-07-24 05:18:42.759221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.261 [2024-07-24 05:18:42.787409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:28.261 [2024-07-24 05:18:42.787648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:28.261 [2024-07-24 05:18:42.787764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.925 ms 00:29:28.261 [2024-07-24 05:18:42.787832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.261 [2024-07-24 05:18:42.805109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.261 [2024-07-24 05:18:42.805155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:28.261 [2024-07-24 05:18:42.805172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.068 ms 00:29:28.261 [2024-07-24 05:18:42.805186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.261 [2024-07-24 05:18:42.805351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.261 [2024-07-24 05:18:42.805375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:28.261 [2024-07-24 05:18:42.805389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:29:28.261 [2024-07-24 05:18:42.805401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.261 [2024-07-24 05:18:42.833742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.261 [2024-07-24 05:18:42.833788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:28.261 [2024-07-24 05:18:42.833805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.315 ms 00:29:28.261 [2024-07-24 05:18:42.833818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.261 [2024-07-24 05:18:42.862232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.261 [2024-07-24 05:18:42.862288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:28.261 [2024-07-24 05:18:42.862304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.339 ms 00:29:28.261 [2024-07-24 05:18:42.862317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.261 [2024-07-24 05:18:42.890174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.261 [2024-07-24 05:18:42.890248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:28.261 [2024-07-24 05:18:42.890265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.801 ms 00:29:28.261 [2024-07-24 05:18:42.890278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.522 [2024-07-24 05:18:42.918475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.522 [2024-07-24 05:18:42.918531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:28.522 [2024-07-24 05:18:42.918548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.058 ms 00:29:28.522 [2024-07-24 05:18:42.918561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.522 [2024-07-24 05:18:42.918606] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:28.522 [2024-07-24 05:18:42.918630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:28.522 [2024-07-24 05:18:42.918647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:28.522 [2024-07-24 05:18:42.918660] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free (Bands 4 through 100 repeat the identical entry "0 / 261120 wr_cnt: 0 state: free"; 97 duplicate per-band lines condensed) 00:29:28.522 [2024-07-24 05:18:42.920580] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:28.522 [2024-07-24 05:18:42.920592] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbfa6140-daca-4402-8cfb-8aeef1de4c65
00:29:28.523 [2024-07-24 05:18:42.920607] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:28.523 [2024-07-24 05:18:42.920618] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:28.523 [2024-07-24 05:18:42.920633] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:28.523 [2024-07-24 05:18:42.920645] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:28.523 [2024-07-24 05:18:42.920657] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:28.523 [2024-07-24 05:18:42.920672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:28.523 [2024-07-24 05:18:42.920686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:28.523 [2024-07-24 05:18:42.920697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:28.523 [2024-07-24 05:18:42.920709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:28.523 [2024-07-24 05:18:42.920721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.523 [2024-07-24 05:18:42.920734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:28.523 [2024-07-24 05:18:42.920747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:29:28.523 [2024-07-24 05:18:42.920778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:42.935130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.523 [2024-07-24 05:18:42.935170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:28.523 [2024-07-24 05:18:42.935202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.288 ms 00:29:28.523 [2024-07-24 05:18:42.935214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:42.935675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.523 [2024-07-24 05:18:42.935711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:28.523 [2024-07-24 05:18:42.935733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:29:28.523 [2024-07-24 05:18:42.935748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:42.979669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.523 [2024-07-24 05:18:42.979727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:28.523 [2024-07-24 05:18:42.979757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.523 [2024-07-24 05:18:42.979787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:42.979919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.523 [2024-07-24 05:18:42.979939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:28.523 [2024-07-24 05:18:42.979955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.523 [2024-07-24 05:18:42.979969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:42.980091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.523 [2024-07-24 05:18:42.980116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:28.523 [2024-07-24 05:18:42.980130] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.523 [2024-07-24 05:18:42.980159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:42.980184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.523 [2024-07-24 05:18:42.980203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:28.523 [2024-07-24 05:18:42.980215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.523 [2024-07-24 05:18:42.980231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:43.062375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.523 [2024-07-24 05:18:43.062454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:28.523 [2024-07-24 05:18:43.062472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.523 [2024-07-24 05:18:43.062485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:43.138446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.523 [2024-07-24 05:18:43.138527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:28.523 [2024-07-24 05:18:43.138552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.523 [2024-07-24 05:18:43.138567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:43.138731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.523 [2024-07-24 05:18:43.138782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:28.523 [2024-07-24 05:18:43.138809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.523 [2024-07-24 05:18:43.138821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:43.138878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.523 [2024-07-24 05:18:43.138899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:28.523 [2024-07-24 05:18:43.138911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.523 [2024-07-24 05:18:43.138923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:43.139091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.523 [2024-07-24 05:18:43.139114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:28.523 [2024-07-24 05:18:43.139127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.523 [2024-07-24 05:18:43.139139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:43.139187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.523 [2024-07-24 05:18:43.139207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:28.523 [2024-07-24 05:18:43.139219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.523 [2024-07-24 05:18:43.139231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:43.139279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.523 [2024-07-24 05:18:43.139296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:29:28.523 [2024-07-24 05:18:43.139308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.523 [2024-07-24 05:18:43.139320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:43.139405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.523 [2024-07-24 05:18:43.139428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:28.523 [2024-07-24 05:18:43.139452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.523 [2024-07-24 05:18:43.139479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.523 [2024-07-24 05:18:43.139639] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 396.045 ms, result 0 00:29:28.523 true 00:29:28.783 05:18:43 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 86608 00:29:28.783 05:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 86608 ']' 00:29:28.783 05:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 86608 00:29:28.783 05:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # uname 00:29:28.783 05:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:28.783 05:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86608 00:29:28.783 killing process with pid 86608 00:29:28.783 05:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:28.783 05:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:28.783 05:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86608' 00:29:28.783 05:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@967 -- # kill 86608 00:29:28.783 05:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # wait 86608 00:29:34.053 05:18:48 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:29:38.248 262144+0 records in 00:29:38.248 262144+0 records out 00:29:38.248 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.13711 s, 260 MB/s 00:29:38.248 05:18:52 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:40.151 05:18:54 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:40.151 [2024-07-24 05:18:54.502117] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
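What the restore test is doing at this point: restore.sh stages a 1 GiB random payload (256K blocks x 4 KiB = 1,073,741,824 bytes; 1,073,741,824 B / 4.13711 s is roughly 260 MB/s, matching the dd report above), fingerprints it with md5sum, then writes it into the FTL bdev with spdk_dd, which boots its own SPDK app from the saved bdev config. A minimal sketch of that sequence, using the same commands and paths seen in this run; the redirect target for the config and the testfile.md5 name are illustrative assumptions here, the script itself may capture them differently:

  # Assemble the bdev subsystem config captured from the running target (restore.sh lines 61-63)
  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  # Stage a 1 GiB random payload and record its checksum for post-restore verification
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile > testfile.md5
  # Write the payload through the FTL bdev (file in, bdev out)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json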
00:29:40.151 [2024-07-24 05:18:54.502257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86825 ] 00:29:40.151 [2024-07-24 05:18:54.666063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.410 [2024-07-24 05:18:54.875335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.669 [2024-07-24 05:18:55.202942] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:40.669 [2024-07-24 05:18:55.203052] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:40.929 [2024-07-24 05:18:55.360662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.929 [2024-07-24 05:18:55.360716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:40.929 [2024-07-24 05:18:55.360765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:40.929 [2024-07-24 05:18:55.360776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.929 [2024-07-24 05:18:55.360834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.929 [2024-07-24 05:18:55.360866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:40.929 [2024-07-24 05:18:55.360877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:40.929 [2024-07-24 05:18:55.360908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.929 [2024-07-24 05:18:55.360943] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:40.929 [2024-07-24 05:18:55.362010] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:40.929 [2024-07-24 05:18:55.362043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.929 [2024-07-24 05:18:55.362055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:40.929 [2024-07-24 05:18:55.362067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:29:40.929 [2024-07-24 05:18:55.362077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.929 [2024-07-24 05:18:55.363602] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:40.929 [2024-07-24 05:18:55.379503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.929 [2024-07-24 05:18:55.379549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:40.929 [2024-07-24 05:18:55.379567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.903 ms 00:29:40.929 [2024-07-24 05:18:55.379579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.929 [2024-07-24 05:18:55.379654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.929 [2024-07-24 05:18:55.379677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:40.929 [2024-07-24 05:18:55.379690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:40.929 [2024-07-24 05:18:55.379701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.929 [2024-07-24 05:18:55.384462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:40.929 [2024-07-24 05:18:55.384503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:40.929 [2024-07-24 05:18:55.384534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.606 ms 00:29:40.929 [2024-07-24 05:18:55.384544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.929 [2024-07-24 05:18:55.384634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.929 [2024-07-24 05:18:55.384652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:40.929 [2024-07-24 05:18:55.384664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:40.929 [2024-07-24 05:18:55.384688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.929 [2024-07-24 05:18:55.384757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.929 [2024-07-24 05:18:55.384773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:40.929 [2024-07-24 05:18:55.384784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:40.929 [2024-07-24 05:18:55.384793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.929 [2024-07-24 05:18:55.384823] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:40.929 [2024-07-24 05:18:55.388949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.929 [2024-07-24 05:18:55.388983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:40.929 [2024-07-24 05:18:55.389012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.134 ms 00:29:40.929 [2024-07-24 05:18:55.389027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.929 [2024-07-24 05:18:55.389063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.929 [2024-07-24 05:18:55.389077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:40.929 [2024-07-24 05:18:55.389087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:40.929 [2024-07-24 05:18:55.389097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.929 [2024-07-24 05:18:55.389137] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:40.929 [2024-07-24 05:18:55.389166] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:40.929 [2024-07-24 05:18:55.389203] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:40.929 [2024-07-24 05:18:55.389224] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:40.929 [2024-07-24 05:18:55.389313] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:40.929 [2024-07-24 05:18:55.389327] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:40.929 [2024-07-24 05:18:55.389355] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:40.929 [2024-07-24 05:18:55.389367] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:40.929 [2024-07-24 05:18:55.389379] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:40.929 [2024-07-24 05:18:55.389390] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:40.929 [2024-07-24 05:18:55.389399] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:40.930 [2024-07-24 05:18:55.389408] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:40.930 [2024-07-24 05:18:55.389417] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:40.930 [2024-07-24 05:18:55.389432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.930 [2024-07-24 05:18:55.389442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:40.930 [2024-07-24 05:18:55.389451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:29:40.930 [2024-07-24 05:18:55.389461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.930 [2024-07-24 05:18:55.389536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.930 [2024-07-24 05:18:55.389549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:40.930 [2024-07-24 05:18:55.389560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:40.930 [2024-07-24 05:18:55.389569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.930 [2024-07-24 05:18:55.389663] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:40.930 [2024-07-24 05:18:55.389682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:40.930 [2024-07-24 05:18:55.389693] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:40.930 [2024-07-24 05:18:55.389703] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.930 [2024-07-24 05:18:55.389713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:40.930 [2024-07-24 05:18:55.389722] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:40.930 [2024-07-24 05:18:55.389745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:40.930 [2024-07-24 05:18:55.389755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:40.930 [2024-07-24 05:18:55.389764] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:40.930 [2024-07-24 05:18:55.389773] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:40.930 [2024-07-24 05:18:55.389781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:40.930 [2024-07-24 05:18:55.389790] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:40.930 [2024-07-24 05:18:55.389798] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:40.930 [2024-07-24 05:18:55.389808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:40.930 [2024-07-24 05:18:55.389817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:40.930 [2024-07-24 05:18:55.389826] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.930 [2024-07-24 05:18:55.389834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:40.930 [2024-07-24 05:18:55.389843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:40.930 [2024-07-24 05:18:55.389852] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.930 [2024-07-24 05:18:55.389860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:40.930 [2024-07-24 05:18:55.390133] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:40.930 [2024-07-24 05:18:55.390198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:40.930 [2024-07-24 05:18:55.390238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:40.930 [2024-07-24 05:18:55.390289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:40.930 [2024-07-24 05:18:55.390408] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:40.930 [2024-07-24 05:18:55.390458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:40.930 [2024-07-24 05:18:55.390495] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:40.930 [2024-07-24 05:18:55.390531] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:40.930 [2024-07-24 05:18:55.390669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:40.930 [2024-07-24 05:18:55.390707] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:40.930 [2024-07-24 05:18:55.390742] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:40.930 [2024-07-24 05:18:55.390871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:40.930 [2024-07-24 05:18:55.390992] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:40.930 [2024-07-24 05:18:55.391102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:40.930 [2024-07-24 05:18:55.391233] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:40.930 [2024-07-24 05:18:55.391272] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:40.930 [2024-07-24 05:18:55.391283] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:40.930 [2024-07-24 05:18:55.391293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:40.930 [2024-07-24 05:18:55.391303] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:40.930 [2024-07-24 05:18:55.391312] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.930 [2024-07-24 05:18:55.391321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:40.930 [2024-07-24 05:18:55.391331] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:40.930 [2024-07-24 05:18:55.391340] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.930 [2024-07-24 05:18:55.391349] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:40.930 [2024-07-24 05:18:55.391360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:40.930 [2024-07-24 05:18:55.391371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:40.930 [2024-07-24 05:18:55.391381] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.930 [2024-07-24 05:18:55.391391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:40.930 [2024-07-24 05:18:55.391401] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:40.930 [2024-07-24 05:18:55.391410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:40.930 
[2024-07-24 05:18:55.391419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:40.930 [2024-07-24 05:18:55.391428] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:40.930 [2024-07-24 05:18:55.391438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:40.930 [2024-07-24 05:18:55.391479] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:40.930 [2024-07-24 05:18:55.391510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:40.930 [2024-07-24 05:18:55.391524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:40.930 [2024-07-24 05:18:55.391535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:40.930 [2024-07-24 05:18:55.391546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:40.930 [2024-07-24 05:18:55.391557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:40.930 [2024-07-24 05:18:55.391568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:40.930 [2024-07-24 05:18:55.391579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:40.930 [2024-07-24 05:18:55.391589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:40.930 [2024-07-24 05:18:55.391600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:40.930 [2024-07-24 05:18:55.391611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:40.930 [2024-07-24 05:18:55.391622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:40.930 [2024-07-24 05:18:55.391633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:40.930 [2024-07-24 05:18:55.391646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:40.930 [2024-07-24 05:18:55.391657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:40.930 [2024-07-24 05:18:55.391668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:40.930 [2024-07-24 05:18:55.391679] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:40.930 [2024-07-24 05:18:55.391699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:40.930 [2024-07-24 05:18:55.391712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:40.930 [2024-07-24 05:18:55.391723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:40.930 [2024-07-24 05:18:55.391734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:40.930 [2024-07-24 05:18:55.391745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:40.930 [2024-07-24 05:18:55.391758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.930 [2024-07-24 05:18:55.391810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:40.930 [2024-07-24 05:18:55.391821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.151 ms 00:29:40.930 [2024-07-24 05:18:55.391831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.930 [2024-07-24 05:18:55.428615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.930 [2024-07-24 05:18:55.428917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:40.930 [2024-07-24 05:18:55.429052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.678 ms 00:29:40.930 [2024-07-24 05:18:55.429103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.930 [2024-07-24 05:18:55.429245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.930 [2024-07-24 05:18:55.429295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:40.930 [2024-07-24 05:18:55.429411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:40.930 [2024-07-24 05:18:55.429483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.930 [2024-07-24 05:18:55.467185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.931 [2024-07-24 05:18:55.467463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:40.931 [2024-07-24 05:18:55.467627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.488 ms 00:29:40.931 [2024-07-24 05:18:55.467680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.931 [2024-07-24 05:18:55.467887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.931 [2024-07-24 05:18:55.467945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:40.931 [2024-07-24 05:18:55.467986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:40.931 [2024-07-24 05:18:55.468070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.931 [2024-07-24 05:18:55.468526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.931 [2024-07-24 05:18:55.468708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:40.931 [2024-07-24 05:18:55.468837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:29:40.931 [2024-07-24 05:18:55.468947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.931 [2024-07-24 05:18:55.469238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.931 [2024-07-24 05:18:55.469417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:40.931 [2024-07-24 05:18:55.469524] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:29:40.931 [2024-07-24 05:18:55.469634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.931 [2024-07-24 05:18:55.485284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.931 [2024-07-24 05:18:55.485530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:40.931 [2024-07-24 05:18:55.485675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.582 ms 00:29:40.931 [2024-07-24 05:18:55.485725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.931 [2024-07-24 05:18:55.500607] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:40.931 [2024-07-24 05:18:55.500806] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:40.931 [2024-07-24 05:18:55.501013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.931 [2024-07-24 05:18:55.501278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:40.931 [2024-07-24 05:18:55.501334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.076 ms 00:29:40.931 [2024-07-24 05:18:55.501373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.931 [2024-07-24 05:18:55.527397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.931 [2024-07-24 05:18:55.527604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:40.931 [2024-07-24 05:18:55.527733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.878 ms 00:29:40.931 [2024-07-24 05:18:55.527927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.931 [2024-07-24 05:18:55.541612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.931 [2024-07-24 05:18:55.541803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:40.931 [2024-07-24 05:18:55.541966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.592 ms 00:29:40.931 [2024-07-24 05:18:55.542017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.931 [2024-07-24 05:18:55.556234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.931 [2024-07-24 05:18:55.556454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:40.931 [2024-07-24 05:18:55.556480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.046 ms 00:29:40.931 [2024-07-24 05:18:55.556492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.931 [2024-07-24 05:18:55.557446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.931 [2024-07-24 05:18:55.557486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:40.931 [2024-07-24 05:18:55.557517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:29:40.931 [2024-07-24 05:18:55.557528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.190 [2024-07-24 05:18:55.628660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.190 [2024-07-24 05:18:55.628789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:41.190 [2024-07-24 05:18:55.628823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 71.101 ms 00:29:41.190 [2024-07-24 05:18:55.628839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.190 [2024-07-24 05:18:55.641267] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:41.190 [2024-07-24 05:18:55.644014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.190 [2024-07-24 05:18:55.644047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:41.190 [2024-07-24 05:18:55.644077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.072 ms 00:29:41.190 [2024-07-24 05:18:55.644087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.190 [2024-07-24 05:18:55.644183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.190 [2024-07-24 05:18:55.644202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:41.190 [2024-07-24 05:18:55.644214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:41.190 [2024-07-24 05:18:55.644224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.190 [2024-07-24 05:18:55.644312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.190 [2024-07-24 05:18:55.644344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:41.190 [2024-07-24 05:18:55.644371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:41.190 [2024-07-24 05:18:55.644382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.190 [2024-07-24 05:18:55.644415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.190 [2024-07-24 05:18:55.644431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:41.190 [2024-07-24 05:18:55.644443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:41.190 [2024-07-24 05:18:55.644453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.190 [2024-07-24 05:18:55.644494] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:41.190 [2024-07-24 05:18:55.644512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.190 [2024-07-24 05:18:55.644528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:41.190 [2024-07-24 05:18:55.644539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:41.190 [2024-07-24 05:18:55.644549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.190 [2024-07-24 05:18:55.675236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.190 [2024-07-24 05:18:55.675485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:41.190 [2024-07-24 05:18:55.675612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.663 ms 00:29:41.190 [2024-07-24 05:18:55.675673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.190 [2024-07-24 05:18:55.675957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.190 [2024-07-24 05:18:55.676020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:41.190 [2024-07-24 05:18:55.676061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:41.190 [2024-07-24 05:18:55.676164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
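The 'FTL startup' about to be reported below took only about 316 ms because this is the restore path: instead of re-initializing the device, spdk_dd reloaded ftl0 from the metadata persisted at the earlier clean shutdown (superblock, NV cache state, valid map, band and trim info, P2L checkpoints, L2P), as the Restore steps above show. To check such a restore end to end, the payload is read back out of the bdev and its checksum compared against the original; a rough sketch, assuming spdk_dd's --ib/--of/--bs/--count options behave as in SPDK's dd tests, and using an illustrative testfile.restored path:

  # Read the 256K x 4 KiB payload back from the restored FTL bdev (bdev in, file out)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile.restored \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
      --bs=4096 --count=262144
  # Restore is verified if both checksums match
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile /home/vagrant/spdk_repo/spdk/test/ftl/testfile.restored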
00:29:41.190 [2024-07-24 05:18:55.677550] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 316.350 ms, result 0 00:30:24.794  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-24 05:19:39.275950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.794 [2024-07-24 05:19:39.276028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:24.794 [2024-07-24 05:19:39.276065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:24.794 [2024-07-24 05:19:39.276077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.794 [2024-07-24 05:19:39.276109] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:24.794 [2024-07-24 05:19:39.279379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.794 [2024-07-24 05:19:39.279415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:24.794 [2024-07-24 05:19:39.279471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.245 ms 00:30:24.794 [2024-07-24 05:19:39.279492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.794 [2024-07-24 05:19:39.281042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.794 [2024-07-24 05:19:39.281084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:24.794 [2024-07-24 05:19:39.281100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.520 ms 00:30:24.794 [2024-07-24 05:19:39.281110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.794 [2024-07-24 05:19:39.281142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.794 [2024-07-24 05:19:39.281156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:24.794 [2024-07-24 05:19:39.281167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:24.794 [2024-07-24 05:19:39.281176] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.794 [2024-07-24 05:19:39.281228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.794 [2024-07-24 05:19:39.281261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:24.794 [2024-07-24 05:19:39.281272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:24.794 [2024-07-24 05:19:39.281283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.794 [2024-07-24 05:19:39.281310] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:24.794 [2024-07-24 05:19:39.281329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281557] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281861] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.281992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.282002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.282012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.282022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.282033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:24.794 [2024-07-24 05:19:39.282043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 
05:19:39.282137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:30:24.795 [2024-07-24 05:19:39.282438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:24.795 [2024-07-24 05:19:39.282502] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:24.795 [2024-07-24 05:19:39.282514] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbfa6140-daca-4402-8cfb-8aeef1de4c65 00:30:24.795 [2024-07-24 05:19:39.282525] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:24.795 [2024-07-24 05:19:39.282535] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:24.795 [2024-07-24 05:19:39.282546] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:24.795 [2024-07-24 05:19:39.282562] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:24.795 [2024-07-24 05:19:39.282572] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:24.795 [2024-07-24 05:19:39.282583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:24.795 [2024-07-24 05:19:39.282593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:24.795 [2024-07-24 05:19:39.282604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:24.795 [2024-07-24 05:19:39.282614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:24.795 [2024-07-24 05:19:39.282624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.795 [2024-07-24 05:19:39.282649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:24.795 [2024-07-24 05:19:39.282675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.324 ms 00:30:24.795 [2024-07-24 05:19:39.282700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.795 [2024-07-24 05:19:39.298015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.795 [2024-07-24 05:19:39.298216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:24.795 [2024-07-24 05:19:39.298372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.295 ms 00:30:24.795 [2024-07-24 05:19:39.298545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.795 [2024-07-24 05:19:39.299088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.795 [2024-07-24 05:19:39.299270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:24.795 [2024-07-24 05:19:39.299435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:30:24.795 [2024-07-24 05:19:39.299585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.795 [2024-07-24 05:19:39.333388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:24.795 [2024-07-24 05:19:39.333626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:30:24.795 [2024-07-24 05:19:39.333758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:24.795 [2024-07-24 05:19:39.333935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.795 [2024-07-24 05:19:39.334048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:24.795 [2024-07-24 05:19:39.334174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:24.795 [2024-07-24 05:19:39.334322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:24.795 [2024-07-24 05:19:39.334377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.795 [2024-07-24 05:19:39.334568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:24.795 [2024-07-24 05:19:39.334640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:24.795 [2024-07-24 05:19:39.334822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:24.795 [2024-07-24 05:19:39.334857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.795 [2024-07-24 05:19:39.334886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:24.795 [2024-07-24 05:19:39.334900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:24.795 [2024-07-24 05:19:39.334911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:24.795 [2024-07-24 05:19:39.334921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.054 [2024-07-24 05:19:39.425524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.054 [2024-07-24 05:19:39.425790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:25.054 [2024-07-24 05:19:39.425965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.054 [2024-07-24 05:19:39.426115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.054 [2024-07-24 05:19:39.496981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.054 [2024-07-24 05:19:39.497230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:25.054 [2024-07-24 05:19:39.497378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.054 [2024-07-24 05:19:39.497507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.054 [2024-07-24 05:19:39.497663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.054 [2024-07-24 05:19:39.497778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:25.054 [2024-07-24 05:19:39.497799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.054 [2024-07-24 05:19:39.497817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.054 [2024-07-24 05:19:39.497922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.054 [2024-07-24 05:19:39.497941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:25.054 [2024-07-24 05:19:39.497953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.054 [2024-07-24 05:19:39.497964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.054 [2024-07-24 05:19:39.498071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.054 [2024-07-24 05:19:39.498093] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:25.054 [2024-07-24 05:19:39.498106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.054 [2024-07-24 05:19:39.498116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.054 [2024-07-24 05:19:39.498164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.054 [2024-07-24 05:19:39.498182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:25.054 [2024-07-24 05:19:39.498193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.054 [2024-07-24 05:19:39.498203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.054 [2024-07-24 05:19:39.498244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.054 [2024-07-24 05:19:39.498273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:25.054 [2024-07-24 05:19:39.498299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.054 [2024-07-24 05:19:39.498309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.054 [2024-07-24 05:19:39.498360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.054 [2024-07-24 05:19:39.498388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:25.054 [2024-07-24 05:19:39.498399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.054 [2024-07-24 05:19:39.498409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.054 [2024-07-24 05:19:39.498552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 222.558 ms, result 0 00:30:25.989 00:30:25.989 00:30:25.989 05:19:40 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:26.247 [2024-07-24 05:19:40.706424] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:30:26.247 [2024-07-24 05:19:40.706599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87263 ] 00:30:26.247 [2024-07-24 05:19:40.875694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.506 [2024-07-24 05:19:41.042310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.765 [2024-07-24 05:19:41.328717] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:26.765 [2024-07-24 05:19:41.328790] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:27.025 [2024-07-24 05:19:41.487170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.025 [2024-07-24 05:19:41.487220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:27.025 [2024-07-24 05:19:41.487271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:27.025 [2024-07-24 05:19:41.487281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.025 [2024-07-24 05:19:41.487339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.025 [2024-07-24 05:19:41.487355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:27.025 [2024-07-24 05:19:41.487367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:27.025 [2024-07-24 05:19:41.487379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.025 [2024-07-24 05:19:41.487410] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:27.025 [2024-07-24 05:19:41.488463] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:27.025 [2024-07-24 05:19:41.488510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.025 [2024-07-24 05:19:41.488556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:27.025 [2024-07-24 05:19:41.488568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:30:27.025 [2024-07-24 05:19:41.488579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.025 [2024-07-24 05:19:41.489050] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:27.025 [2024-07-24 05:19:41.489082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.025 [2024-07-24 05:19:41.489095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:27.025 [2024-07-24 05:19:41.489112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:27.025 [2024-07-24 05:19:41.489123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.025 [2024-07-24 05:19:41.489176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.025 [2024-07-24 05:19:41.489198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:27.025 [2024-07-24 05:19:41.489249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:27.025 [2024-07-24 05:19:41.489266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.025 [2024-07-24 05:19:41.489680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:27.025 [2024-07-24 05:19:41.489705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:27.025 [2024-07-24 05:19:41.489722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:30:27.025 [2024-07-24 05:19:41.489732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.025 [2024-07-24 05:19:41.489806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.025 [2024-07-24 05:19:41.489824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:27.025 [2024-07-24 05:19:41.489835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:27.025 [2024-07-24 05:19:41.489844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.025 [2024-07-24 05:19:41.489892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.025 [2024-07-24 05:19:41.489908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:27.025 [2024-07-24 05:19:41.489920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:30:27.025 [2024-07-24 05:19:41.489930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.025 [2024-07-24 05:19:41.489962] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:27.025 [2024-07-24 05:19:41.494015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.025 [2024-07-24 05:19:41.494053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:27.025 [2024-07-24 05:19:41.494084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.060 ms 00:30:27.025 [2024-07-24 05:19:41.494094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.025 [2024-07-24 05:19:41.494133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.025 [2024-07-24 05:19:41.494147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:27.025 [2024-07-24 05:19:41.494157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:27.025 [2024-07-24 05:19:41.494166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.025 [2024-07-24 05:19:41.494225] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:27.025 [2024-07-24 05:19:41.494254] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:27.025 [2024-07-24 05:19:41.494292] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:27.025 [2024-07-24 05:19:41.494310] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:30:27.025 [2024-07-24 05:19:41.494398] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:27.025 [2024-07-24 05:19:41.494412] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:27.025 [2024-07-24 05:19:41.494424] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:27.025 [2024-07-24 05:19:41.494437] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:27.025 [2024-07-24 05:19:41.494448] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:27.025 [2024-07-24 05:19:41.494458] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:27.025 [2024-07-24 05:19:41.494468] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:27.025 [2024-07-24 05:19:41.494481] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:27.025 [2024-07-24 05:19:41.494490] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:27.025 [2024-07-24 05:19:41.494499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.025 [2024-07-24 05:19:41.494509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:27.026 [2024-07-24 05:19:41.494518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:30:27.026 [2024-07-24 05:19:41.494528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.026 [2024-07-24 05:19:41.494608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.026 [2024-07-24 05:19:41.494621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:27.026 [2024-07-24 05:19:41.494631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:27.026 [2024-07-24 05:19:41.494641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.026 [2024-07-24 05:19:41.494742] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:27.026 [2024-07-24 05:19:41.494756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:27.026 [2024-07-24 05:19:41.494766] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:27.026 [2024-07-24 05:19:41.494776] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.026 [2024-07-24 05:19:41.494786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:27.026 [2024-07-24 05:19:41.494795] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:27.026 [2024-07-24 05:19:41.494804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:27.026 [2024-07-24 05:19:41.494814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:27.026 [2024-07-24 05:19:41.494823] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:27.026 [2024-07-24 05:19:41.494832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:27.026 [2024-07-24 05:19:41.494841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:27.026 [2024-07-24 05:19:41.494866] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:27.026 [2024-07-24 05:19:41.494914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:27.026 [2024-07-24 05:19:41.494924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:27.026 [2024-07-24 05:19:41.494934] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:27.026 [2024-07-24 05:19:41.494944] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.026 [2024-07-24 05:19:41.494954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:27.026 [2024-07-24 05:19:41.494964] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:27.026 [2024-07-24 05:19:41.494989] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.026 [2024-07-24 05:19:41.494999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:27.026 [2024-07-24 05:19:41.495008] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:27.026 [2024-07-24 05:19:41.495018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:27.026 [2024-07-24 05:19:41.495040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:27.026 [2024-07-24 05:19:41.495050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:27.026 [2024-07-24 05:19:41.495060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:27.026 [2024-07-24 05:19:41.495069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:27.026 [2024-07-24 05:19:41.495079] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:27.026 [2024-07-24 05:19:41.495088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:27.026 [2024-07-24 05:19:41.495098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:27.026 [2024-07-24 05:19:41.495107] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:27.026 [2024-07-24 05:19:41.495116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:27.026 [2024-07-24 05:19:41.495125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:27.026 [2024-07-24 05:19:41.495135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:27.026 [2024-07-24 05:19:41.495144] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:27.026 [2024-07-24 05:19:41.495154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:27.026 [2024-07-24 05:19:41.495183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:27.026 [2024-07-24 05:19:41.495200] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:27.026 [2024-07-24 05:19:41.495217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:27.026 [2024-07-24 05:19:41.495235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:27.026 [2024-07-24 05:19:41.495269] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.026 [2024-07-24 05:19:41.495318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:27.026 [2024-07-24 05:19:41.495335] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:27.026 [2024-07-24 05:19:41.495346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.026 [2024-07-24 05:19:41.495355] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:27.026 [2024-07-24 05:19:41.495366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:27.026 [2024-07-24 05:19:41.495376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:27.026 [2024-07-24 05:19:41.495386] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.026 [2024-07-24 05:19:41.495396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:27.026 [2024-07-24 05:19:41.495408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:27.026 [2024-07-24 05:19:41.495417] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:27.026 
[2024-07-24 05:19:41.495427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:27.026 [2024-07-24 05:19:41.495436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:27.026 [2024-07-24 05:19:41.495473] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:27.026 [2024-07-24 05:19:41.495487] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:27.026 [2024-07-24 05:19:41.495512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:27.026 [2024-07-24 05:19:41.495533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:27.026 [2024-07-24 05:19:41.495545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:27.026 [2024-07-24 05:19:41.495556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:27.026 [2024-07-24 05:19:41.495572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:27.026 [2024-07-24 05:19:41.495591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:27.026 [2024-07-24 05:19:41.495611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:27.026 [2024-07-24 05:19:41.495631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:27.026 [2024-07-24 05:19:41.495654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:27.026 [2024-07-24 05:19:41.495674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:27.026 [2024-07-24 05:19:41.495689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:27.026 [2024-07-24 05:19:41.495701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:27.026 [2024-07-24 05:19:41.495712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:27.026 [2024-07-24 05:19:41.495724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:27.026 [2024-07-24 05:19:41.495736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:27.026 [2024-07-24 05:19:41.495747] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:27.026 [2024-07-24 05:19:41.495760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:27.026 [2024-07-24 05:19:41.495772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:27.026 [2024-07-24 05:19:41.495799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:27.026 [2024-07-24 05:19:41.495826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:27.026 [2024-07-24 05:19:41.495873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:27.026 [2024-07-24 05:19:41.495886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.026 [2024-07-24 05:19:41.495897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:27.026 [2024-07-24 05:19:41.495908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:30:27.026 [2024-07-24 05:19:41.495918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.026 [2024-07-24 05:19:41.529106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.026 [2024-07-24 05:19:41.529161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:27.026 [2024-07-24 05:19:41.529197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.115 ms 00:30:27.026 [2024-07-24 05:19:41.529223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.026 [2024-07-24 05:19:41.529327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.026 [2024-07-24 05:19:41.529343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:27.026 [2024-07-24 05:19:41.529354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:27.026 [2024-07-24 05:19:41.529363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.026 [2024-07-24 05:19:41.562205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.026 [2024-07-24 05:19:41.562255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:27.026 [2024-07-24 05:19:41.562290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.755 ms 00:30:27.026 [2024-07-24 05:19:41.562300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.026 [2024-07-24 05:19:41.562361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.026 [2024-07-24 05:19:41.562381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:27.027 [2024-07-24 05:19:41.562392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:27.027 [2024-07-24 05:19:41.562402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.562542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.562559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:27.027 [2024-07-24 05:19:41.562571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:27.027 [2024-07-24 05:19:41.562580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.562711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.562727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:27.027 [2024-07-24 05:19:41.562742] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:30:27.027 [2024-07-24 05:19:41.562751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.577054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.577090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:27.027 [2024-07-24 05:19:41.577126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.281 ms 00:30:27.027 [2024-07-24 05:19:41.577136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.577282] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:27.027 [2024-07-24 05:19:41.577304] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:27.027 [2024-07-24 05:19:41.577316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.577327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:27.027 [2024-07-24 05:19:41.577337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:30:27.027 [2024-07-24 05:19:41.577350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.589098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.589132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:27.027 [2024-07-24 05:19:41.589161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.727 ms 00:30:27.027 [2024-07-24 05:19:41.589170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.589277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.589292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:27.027 [2024-07-24 05:19:41.589303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:30:27.027 [2024-07-24 05:19:41.589312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.589390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.589408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:27.027 [2024-07-24 05:19:41.589419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:27.027 [2024-07-24 05:19:41.589429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.590126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.590153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:27.027 [2024-07-24 05:19:41.590166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:30:27.027 [2024-07-24 05:19:41.590181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.590254] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:27.027 [2024-07-24 05:19:41.590279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.590342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:27.027 [2024-07-24 05:19:41.590366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:27.027 [2024-07-24 05:19:41.590376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.601720] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:27.027 [2024-07-24 05:19:41.602003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.602024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:27.027 [2024-07-24 05:19:41.602038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.599 ms 00:30:27.027 [2024-07-24 05:19:41.602049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.604190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.604223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:27.027 [2024-07-24 05:19:41.604273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms 00:30:27.027 [2024-07-24 05:19:41.604283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.604380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.604398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:27.027 [2024-07-24 05:19:41.604410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:27.027 [2024-07-24 05:19:41.604419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.604447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.604461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:27.027 [2024-07-24 05:19:41.604477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:27.027 [2024-07-24 05:19:41.604486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.604517] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:27.027 [2024-07-24 05:19:41.604531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.604541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:27.027 [2024-07-24 05:19:41.604551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:27.027 [2024-07-24 05:19:41.604560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.631634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.631689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:27.027 [2024-07-24 05:19:41.631722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.049 ms 00:30:27.027 [2024-07-24 05:19:41.631733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.631838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.027 [2024-07-24 05:19:41.631890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:27.027 [2024-07-24 05:19:41.631906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.033 ms 00:30:27.027 [2024-07-24 05:19:41.631917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.027 [2024-07-24 05:19:41.633224] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 145.483 ms, result 0 00:31:11.594  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-24 05:20:26.002749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.594 [2024-07-24 05:20:26.002820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:11.594 [2024-07-24 05:20:26.002887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:11.594 [2024-07-24 05:20:26.002905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.594 [2024-07-24 05:20:26.002944] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:11.594 [2024-07-24 05:20:26.006403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.594 [2024-07-24 05:20:26.006439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:11.594 [2024-07-24 05:20:26.006469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.410 ms 00:31:11.594 [2024-07-24 05:20:26.006479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.594 [2024-07-24 05:20:26.006680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.594 [2024-07-24 05:20:26.006696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:11.594 [2024-07-24 05:20:26.006708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:31:11.594 [2024-07-24 05:20:26.006718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.594 [2024-07-24 05:20:26.006746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.594 [2024-07-24 05:20:26.006758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 
00:31:11.594 [2024-07-24 05:20:26.006774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:11.594 [2024-07-24 05:20:26.006784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.594 [2024-07-24 05:20:26.006834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.594 [2024-07-24 05:20:26.006862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:11.594 [2024-07-24 05:20:26.006885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:11.594 [2024-07-24 05:20:26.006896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.594 [2024-07-24 05:20:26.006914] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:11.594 [2024-07-24 05:20:26.006932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.006944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.006955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.006965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.006975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.006985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.006995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007135] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:11.594 [2024-07-24 05:20:26.007155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 
[2024-07-24 05:20:26.007400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 
state: free 00:31:11.595 [2024-07-24 05:20:26.007731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.007995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.008006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.008016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.008026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.008036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.008046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.008055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 
0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.008066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.008077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.008086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.008096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.008106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.008116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:11.595 [2024-07-24 05:20:26.008133] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:11.595 [2024-07-24 05:20:26.008160] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbfa6140-daca-4402-8cfb-8aeef1de4c65 00:31:11.595 [2024-07-24 05:20:26.008175] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:11.595 [2024-07-24 05:20:26.008185] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:11.595 [2024-07-24 05:20:26.008215] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:11.595 [2024-07-24 05:20:26.008232] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:11.595 [2024-07-24 05:20:26.008249] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:11.595 [2024-07-24 05:20:26.008266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:11.595 [2024-07-24 05:20:26.008284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:11.596 [2024-07-24 05:20:26.008294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:11.596 [2024-07-24 05:20:26.008303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:11.596 [2024-07-24 05:20:26.008330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.596 [2024-07-24 05:20:26.008340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:11.596 [2024-07-24 05:20:26.008351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:31:11.596 [2024-07-24 05:20:26.008362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.022855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.596 [2024-07-24 05:20:26.022906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:11.596 [2024-07-24 05:20:26.022938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.471 ms 00:31:11.596 [2024-07-24 05:20:26.022948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.023354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.596 [2024-07-24 05:20:26.023383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:11.596 [2024-07-24 05:20:26.023397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:31:11.596 [2024-07-24 05:20:26.023407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.055309] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.596 [2024-07-24 05:20:26.055348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:11.596 [2024-07-24 05:20:26.055379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.596 [2024-07-24 05:20:26.055389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.055467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.596 [2024-07-24 05:20:26.055498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:11.596 [2024-07-24 05:20:26.055509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.596 [2024-07-24 05:20:26.055518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.055616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.596 [2024-07-24 05:20:26.055635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:11.596 [2024-07-24 05:20:26.055646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.596 [2024-07-24 05:20:26.055656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.055676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.596 [2024-07-24 05:20:26.055688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:11.596 [2024-07-24 05:20:26.055698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.596 [2024-07-24 05:20:26.055708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.133994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.596 [2024-07-24 05:20:26.134053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:11.596 [2024-07-24 05:20:26.134087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.596 [2024-07-24 05:20:26.134097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.201968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.596 [2024-07-24 05:20:26.202020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:11.596 [2024-07-24 05:20:26.202053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.596 [2024-07-24 05:20:26.202063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.202133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.596 [2024-07-24 05:20:26.202149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:11.596 [2024-07-24 05:20:26.202159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.596 [2024-07-24 05:20:26.202169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.202245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.596 [2024-07-24 05:20:26.202260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:11.596 [2024-07-24 05:20:26.202270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.596 [2024-07-24 05:20:26.202279] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.202361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.596 [2024-07-24 05:20:26.202382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:11.596 [2024-07-24 05:20:26.202392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.596 [2024-07-24 05:20:26.202402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.202441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.596 [2024-07-24 05:20:26.202456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:11.596 [2024-07-24 05:20:26.202466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.596 [2024-07-24 05:20:26.202475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.202513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.596 [2024-07-24 05:20:26.202529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:11.596 [2024-07-24 05:20:26.202539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.596 [2024-07-24 05:20:26.202548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.202592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.596 [2024-07-24 05:20:26.202606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:11.596 [2024-07-24 05:20:26.202615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.596 [2024-07-24 05:20:26.202624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.596 [2024-07-24 05:20:26.202742] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 199.985 ms, result 0 00:31:12.533 00:31:12.533 00:31:12.533 05:20:27 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:14.435 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:14.435 05:20:29 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:14.694 [2024-07-24 05:20:29.145727] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
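
The md5sum/spdk_dd pair above is the heart of the ftl_restore_fast check: data written through ftl0 before the 'FTL fast shutdown' must read back bit-identical once the device is brought up again. Below is a minimal sketch of that pattern in shell, with hypothetical paths and an explicit I/O unit; only --if/--ob/--json/--seek appear in the logged command, and --of/--ib/--skip/--count/--bs are assumed to be the matching input-side spdk_dd options.

#!/usr/bin/env bash
set -e
TESTFILE=/tmp/testfile   # hypothetical: data previously written through ftl0
FTL_JSON=/tmp/ftl.json   # hypothetical: bdev config that defines ftl0
BS=4096                  # explicit I/O unit; the logged command relies on the default

md5sum "$TESTFILE" > /tmp/testfile.md5                   # record the source checksum
spdk_dd --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON" \
        --bs=$BS --seek=131072                           # --seek skips 131072 I/O units of ftl0 before writing

# ... fast shutdown and restore happen here (the 'FTL fast shutdown' and
#     second 'FTL startup' sequences in this trace) ...

COUNT=$(( ($(stat -c %s "$TESTFILE") + BS - 1) / BS ))   # file length in I/O units
spdk_dd --of="$TESTFILE" --ib=ftl0 --json="$FTL_JSON" \
        --bs=$BS --skip=131072 --count=$COUNT            # read the same region back over the file
md5sum -c /tmp/testfile.md5                              # "testfile: OK" only if the restored data is identical
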
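Every management step in this trace is logged as an Action / name / duration / status quadruple, and each pipeline closes with a finish_msg total (145.483 ms for 'FTL startup' and 199.985 ms for 'FTL fast shutdown' above). A rough cross-check, assuming the console output has been captured to a hypothetical build.log:

grep -o 'duration: [0-9.]* ms' build.log |
  awk '{ sum += $2 } END { printf "sum of per-step durations: %.3f ms\n", sum }'
grep -o "Management process finished, name '[^']*', duration = [0-9.]* ms, result [0-9]*" build.log

The two numbers will not match exactly: the capture interleaves several management processes, and time spent between steps is not attributed to any step, so the per-step sum is only a sanity bound.
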
00:31:14.694 [2024-07-24 05:20:29.145926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87724 ] 00:31:14.694 [2024-07-24 05:20:29.315520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.953 [2024-07-24 05:20:29.497855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.212 [2024-07-24 05:20:29.766009] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:15.212 [2024-07-24 05:20:29.766096] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:15.472 [2024-07-24 05:20:29.924624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.472 [2024-07-24 05:20:29.924677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:15.472 [2024-07-24 05:20:29.924714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:15.472 [2024-07-24 05:20:29.924724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.472 [2024-07-24 05:20:29.924783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.472 [2024-07-24 05:20:29.924815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:15.472 [2024-07-24 05:20:29.924827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:15.472 [2024-07-24 05:20:29.924840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.473 [2024-07-24 05:20:29.924910] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:15.473 [2024-07-24 05:20:29.925961] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:15.473 [2024-07-24 05:20:29.926019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.473 [2024-07-24 05:20:29.926050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:15.473 [2024-07-24 05:20:29.926061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:31:15.473 [2024-07-24 05:20:29.926071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.473 [2024-07-24 05:20:29.926502] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:15.473 [2024-07-24 05:20:29.926526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.473 [2024-07-24 05:20:29.926537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:15.473 [2024-07-24 05:20:29.926554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:15.473 [2024-07-24 05:20:29.926564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.473 [2024-07-24 05:20:29.926614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.473 [2024-07-24 05:20:29.926628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:15.473 [2024-07-24 05:20:29.926639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:15.473 [2024-07-24 05:20:29.926648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.473 [2024-07-24 05:20:29.927048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:15.473 [2024-07-24 05:20:29.927067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:15.473 [2024-07-24 05:20:29.927083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:31:15.473 [2024-07-24 05:20:29.927093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.473 [2024-07-24 05:20:29.927165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.473 [2024-07-24 05:20:29.927182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:15.473 [2024-07-24 05:20:29.927193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:15.473 [2024-07-24 05:20:29.927203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.473 [2024-07-24 05:20:29.927252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.473 [2024-07-24 05:20:29.927266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:15.473 [2024-07-24 05:20:29.927277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:15.473 [2024-07-24 05:20:29.927286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.473 [2024-07-24 05:20:29.927315] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:15.473 [2024-07-24 05:20:29.931539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.473 [2024-07-24 05:20:29.931581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:15.473 [2024-07-24 05:20:29.931597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.229 ms 00:31:15.473 [2024-07-24 05:20:29.931607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.473 [2024-07-24 05:20:29.931648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.473 [2024-07-24 05:20:29.931663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:15.473 [2024-07-24 05:20:29.931674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:15.473 [2024-07-24 05:20:29.931684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.473 [2024-07-24 05:20:29.931748] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:15.473 [2024-07-24 05:20:29.931792] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:15.473 [2024-07-24 05:20:29.931832] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:15.473 [2024-07-24 05:20:29.931864] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:31:15.473 [2024-07-24 05:20:29.931993] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:15.473 [2024-07-24 05:20:29.932007] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:15.473 [2024-07-24 05:20:29.932020] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:15.473 [2024-07-24 05:20:29.932034] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:15.473 [2024-07-24 05:20:29.932046] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:15.473 [2024-07-24 05:20:29.932056] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:15.473 [2024-07-24 05:20:29.932066] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:15.473 [2024-07-24 05:20:29.932080] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:15.473 [2024-07-24 05:20:29.932090] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:15.473 [2024-07-24 05:20:29.932101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.473 [2024-07-24 05:20:29.932111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:15.473 [2024-07-24 05:20:29.932121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:31:15.473 [2024-07-24 05:20:29.932132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.473 [2024-07-24 05:20:29.932216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.473 [2024-07-24 05:20:29.932230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:15.473 [2024-07-24 05:20:29.932240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:31:15.473 [2024-07-24 05:20:29.932265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.473 [2024-07-24 05:20:29.932357] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:15.473 [2024-07-24 05:20:29.932373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:15.473 [2024-07-24 05:20:29.932383] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:15.473 [2024-07-24 05:20:29.932409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.473 [2024-07-24 05:20:29.932419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:15.473 [2024-07-24 05:20:29.932429] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:15.473 [2024-07-24 05:20:29.932439] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:15.473 [2024-07-24 05:20:29.932449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:15.473 [2024-07-24 05:20:29.932459] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:15.473 [2024-07-24 05:20:29.932468] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:15.473 [2024-07-24 05:20:29.932477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:15.473 [2024-07-24 05:20:29.932486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:15.473 [2024-07-24 05:20:29.932495] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:15.473 [2024-07-24 05:20:29.932504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:15.473 [2024-07-24 05:20:29.932513] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:15.473 [2024-07-24 05:20:29.932522] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.473 [2024-07-24 05:20:29.932533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:15.473 [2024-07-24 05:20:29.932542] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:15.473 [2024-07-24 05:20:29.932551] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.473 [2024-07-24 05:20:29.932561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:15.473 [2024-07-24 05:20:29.932570] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:15.473 [2024-07-24 05:20:29.932579] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.473 [2024-07-24 05:20:29.932601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:15.473 [2024-07-24 05:20:29.932610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:15.473 [2024-07-24 05:20:29.932619] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.473 [2024-07-24 05:20:29.932628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:15.473 [2024-07-24 05:20:29.932637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:15.473 [2024-07-24 05:20:29.932646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.473 [2024-07-24 05:20:29.932656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:15.473 [2024-07-24 05:20:29.932665] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:15.473 [2024-07-24 05:20:29.932674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.473 [2024-07-24 05:20:29.932683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:15.473 [2024-07-24 05:20:29.932692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:15.473 [2024-07-24 05:20:29.932701] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:15.473 [2024-07-24 05:20:29.932710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:15.473 [2024-07-24 05:20:29.932734] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:15.473 [2024-07-24 05:20:29.932743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:15.473 [2024-07-24 05:20:29.932751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:15.473 [2024-07-24 05:20:29.932760] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:15.473 [2024-07-24 05:20:29.932769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.473 [2024-07-24 05:20:29.932778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:15.473 [2024-07-24 05:20:29.932787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:15.473 [2024-07-24 05:20:29.932797] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.473 [2024-07-24 05:20:29.932805] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:15.473 [2024-07-24 05:20:29.932815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:15.473 [2024-07-24 05:20:29.932825] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:15.473 [2024-07-24 05:20:29.932834] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.474 [2024-07-24 05:20:29.932843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:15.474 [2024-07-24 05:20:29.932854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:15.474 [2024-07-24 05:20:29.932863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:15.474 
[2024-07-24 05:20:29.932872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:15.474 [2024-07-24 05:20:29.932881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:15.474 [2024-07-24 05:20:29.932890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:15.474 [2024-07-24 05:20:29.932917] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:15.474 [2024-07-24 05:20:29.932932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:15.474 [2024-07-24 05:20:29.932947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:15.474 [2024-07-24 05:20:29.932958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:15.474 [2024-07-24 05:20:29.932967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:15.474 [2024-07-24 05:20:29.932977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:15.474 [2024-07-24 05:20:29.932987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:15.474 [2024-07-24 05:20:29.932996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:15.474 [2024-07-24 05:20:29.933006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:15.474 [2024-07-24 05:20:29.933016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:15.474 [2024-07-24 05:20:29.933025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:15.474 [2024-07-24 05:20:29.933034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:15.474 [2024-07-24 05:20:29.933044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:15.474 [2024-07-24 05:20:29.933053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:15.474 [2024-07-24 05:20:29.933063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:15.474 [2024-07-24 05:20:29.933073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:15.474 [2024-07-24 05:20:29.933083] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:15.474 [2024-07-24 05:20:29.933094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:15.474 [2024-07-24 05:20:29.933105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:15.474 [2024-07-24 05:20:29.933115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:15.474 [2024-07-24 05:20:29.933125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:15.474 [2024-07-24 05:20:29.933135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:15.474 [2024-07-24 05:20:29.933162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:29.933172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:15.474 [2024-07-24 05:20:29.933182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:31:15.474 [2024-07-24 05:20:29.933192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:29.970195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:29.970443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:15.474 [2024-07-24 05:20:29.970620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.951 ms 00:31:15.474 [2024-07-24 05:20:29.970754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:29.970967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:29.971055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:15.474 [2024-07-24 05:20:29.971193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:31:15.474 [2024-07-24 05:20:29.971326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.013649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:30.013974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:15.474 [2024-07-24 05:20:30.014163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.147 ms 00:31:15.474 [2024-07-24 05:20:30.014368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.014663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:30.014829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:15.474 [2024-07-24 05:20:30.014998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:15.474 [2024-07-24 05:20:30.015070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.015431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:30.015542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:15.474 [2024-07-24 05:20:30.015788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:31:15.474 [2024-07-24 05:20:30.016018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.016406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:30.016615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:15.474 [2024-07-24 05:20:30.016823] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:31:15.474 [2024-07-24 05:20:30.017024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.039467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:30.039739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:15.474 [2024-07-24 05:20:30.039985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.320 ms 00:31:15.474 [2024-07-24 05:20:30.040088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.040449] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:15.474 [2024-07-24 05:20:30.040502] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:15.474 [2024-07-24 05:20:30.040530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:30.040552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:15.474 [2024-07-24 05:20:30.040581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:31:15.474 [2024-07-24 05:20:30.040615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.053285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:30.053324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:15.474 [2024-07-24 05:20:30.053354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.626 ms 00:31:15.474 [2024-07-24 05:20:30.053364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.053475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:30.053489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:15.474 [2024-07-24 05:20:30.053500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:31:15.474 [2024-07-24 05:20:30.053509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.053573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:30.053589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:15.474 [2024-07-24 05:20:30.053600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:15.474 [2024-07-24 05:20:30.053609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.054489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:30.054632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:15.474 [2024-07-24 05:20:30.054743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:31:15.474 [2024-07-24 05:20:30.054908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.054978] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:15.474 [2024-07-24 05:20:30.055119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:30.055153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:15.474 [2024-07-24 05:20:30.055166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:31:15.474 [2024-07-24 05:20:30.055177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.066612] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:15.474 [2024-07-24 05:20:30.066998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:30.067133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:15.474 [2024-07-24 05:20:30.067251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.789 ms 00:31:15.474 [2024-07-24 05:20:30.067371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.069566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:30.069738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:15.474 [2024-07-24 05:20:30.069896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.091 ms 00:31:15.474 [2024-07-24 05:20:30.069946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.070173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.474 [2024-07-24 05:20:30.070235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:15.474 [2024-07-24 05:20:30.070295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:15.474 [2024-07-24 05:20:30.070382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.474 [2024-07-24 05:20:30.070448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.475 [2024-07-24 05:20:30.070492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:15.475 [2024-07-24 05:20:30.070537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:15.475 [2024-07-24 05:20:30.070572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.475 [2024-07-24 05:20:30.070792] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:15.475 [2024-07-24 05:20:30.070867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.475 [2024-07-24 05:20:30.070929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:15.475 [2024-07-24 05:20:30.070975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:31:15.475 [2024-07-24 05:20:30.071010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.475 [2024-07-24 05:20:30.097883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.475 [2024-07-24 05:20:30.098149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:15.475 [2024-07-24 05:20:30.098279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.816 ms 00:31:15.475 [2024-07-24 05:20:30.098348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.475 [2024-07-24 05:20:30.098482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.475 [2024-07-24 05:20:30.098552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:15.475 [2024-07-24 05:20:30.098682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.059 ms 00:31:15.475 [2024-07-24 05:20:30.098729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.475 [2024-07-24 05:20:30.100131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 174.886 ms, result 0 00:31:59.156  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-24 05:21:13.551338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.156 [2024-07-24 05:21:13.551616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:59.156 [2024-07-24 05:21:13.551750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:59.156 [2024-07-24 05:21:13.551804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.156 [2024-07-24 05:21:13.553398] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:59.156 [2024-07-24 05:21:13.559189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.156 [2024-07-24 05:21:13.559245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:59.156 [2024-07-24 05:21:13.559276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.628 ms 00:31:59.156 [2024-07-24 05:21:13.559286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.156 [2024-07-24 05:21:13.570371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.156 [2024-07-24 05:21:13.570446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:59.156 [2024-07-24 05:21:13.570472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.484 ms 00:31:59.156 [2024-07-24 05:21:13.570483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.156 [2024-07-24 05:21:13.570517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.156 [2024-07-24 05:21:13.570532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:59.157 [2024-07-24
05:21:13.570543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:59.157 [2024-07-24 05:21:13.570553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-07-24 05:21:13.570621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-07-24 05:21:13.570635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:59.157 [2024-07-24 05:21:13.570646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:59.157 [2024-07-24 05:21:13.570660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-07-24 05:21:13.570696] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:59.157 [2024-07-24 05:21:13.570712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130816 / 261120 wr_cnt: 1 state: open 00:31:59.157 [2024-07-24 05:21:13.570726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570951] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.570996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 
05:21:13.571254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:31:59.157 [2024-07-24 05:21:13.571599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:59.157 [2024-07-24 05:21:13.571991] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:59.157 [2024-07-24 05:21:13.572003] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbfa6140-daca-4402-8cfb-8aeef1de4c65 00:31:59.157 [2024-07-24 05:21:13.572014] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130816 00:31:59.157 [2024-07-24 05:21:13.572024] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130848 00:31:59.157 [2024-07-24 05:21:13.572046] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130816 00:31:59.157 [2024-07-24 05:21:13.572058] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:31:59.157 [2024-07-24 05:21:13.572069] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:59.157 [2024-07-24 05:21:13.572080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:59.157 [2024-07-24 05:21:13.572097] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:59.157 [2024-07-24 05:21:13.572107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:59.157 [2024-07-24 05:21:13.572117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:59.157 [2024-07-24 05:21:13.572128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-07-24 05:21:13.572139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:59.157 [2024-07-24 05:21:13.572151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms 00:31:59.157 [2024-07-24 05:21:13.572161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-07-24 05:21:13.588652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-07-24 05:21:13.588695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:59.157 [2024-07-24 05:21:13.588728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.467 ms 00:31:59.157 [2024-07-24 05:21:13.588739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-07-24 05:21:13.589249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-07-24 05:21:13.589274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:59.157 [2024-07-24 05:21:13.589288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:31:59.157 [2024-07-24 05:21:13.589299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-07-24 
05:21:13.625316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.157 [2024-07-24 05:21:13.625374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:59.157 [2024-07-24 05:21:13.625398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.157 [2024-07-24 05:21:13.625410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-07-24 05:21:13.625502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.157 [2024-07-24 05:21:13.625532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:59.157 [2024-07-24 05:21:13.625543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.157 [2024-07-24 05:21:13.625568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-07-24 05:21:13.625651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.157 [2024-07-24 05:21:13.625670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:59.157 [2024-07-24 05:21:13.625681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.157 [2024-07-24 05:21:13.625697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-07-24 05:21:13.625723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.157 [2024-07-24 05:21:13.625752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:59.157 [2024-07-24 05:21:13.625763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.157 [2024-07-24 05:21:13.625773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-07-24 05:21:13.726721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.157 [2024-07-24 05:21:13.726780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:59.157 [2024-07-24 05:21:13.726813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.157 [2024-07-24 05:21:13.726830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.415 [2024-07-24 05:21:13.805052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.415 [2024-07-24 05:21:13.805109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:59.415 [2024-07-24 05:21:13.805141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.415 [2024-07-24 05:21:13.805152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.415 [2024-07-24 05:21:13.805276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.415 [2024-07-24 05:21:13.805292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:59.415 [2024-07-24 05:21:13.805303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.415 [2024-07-24 05:21:13.805314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.415 [2024-07-24 05:21:13.805364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.415 [2024-07-24 05:21:13.805380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:59.415 [2024-07-24 05:21:13.805391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.415 [2024-07-24 05:21:13.805400] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.415 [2024-07-24 05:21:13.805493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.415 [2024-07-24 05:21:13.805511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:59.415 [2024-07-24 05:21:13.805523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.415 [2024-07-24 05:21:13.805532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.415 [2024-07-24 05:21:13.805567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.415 [2024-07-24 05:21:13.805588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:59.415 [2024-07-24 05:21:13.805599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.415 [2024-07-24 05:21:13.805609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.415 [2024-07-24 05:21:13.805649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.415 [2024-07-24 05:21:13.805678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:59.415 [2024-07-24 05:21:13.805688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.415 [2024-07-24 05:21:13.805698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.415 [2024-07-24 05:21:13.805747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.415 [2024-07-24 05:21:13.805762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:59.415 [2024-07-24 05:21:13.805773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.415 [2024-07-24 05:21:13.805783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.415 [2024-07-24 05:21:13.805979] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 257.527 ms, result 0 00:32:00.791 00:32:00.791 00:32:00.791 05:21:15 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:00.791 [2024-07-24 05:21:15.349533] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:32:00.791 [2024-07-24 05:21:15.349694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88183 ] 00:32:01.050 [2024-07-24 05:21:15.518575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.050 [2024-07-24 05:21:15.676794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.619 [2024-07-24 05:21:15.943816] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:01.619 [2024-07-24 05:21:15.943939] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:01.619 [2024-07-24 05:21:16.102686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.619 [2024-07-24 05:21:16.102740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:01.619 [2024-07-24 05:21:16.102775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:01.619 [2024-07-24 05:21:16.102785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.619 [2024-07-24 05:21:16.102842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.619 [2024-07-24 05:21:16.102891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:01.619 [2024-07-24 05:21:16.102904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:01.619 [2024-07-24 05:21:16.102919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.619 [2024-07-24 05:21:16.102952] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:01.619 [2024-07-24 05:21:16.103917] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:01.619 [2024-07-24 05:21:16.103949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.619 [2024-07-24 05:21:16.103961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:01.619 [2024-07-24 05:21:16.103972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:32:01.619 [2024-07-24 05:21:16.103982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.619 [2024-07-24 05:21:16.104402] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:01.619 [2024-07-24 05:21:16.104431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.619 [2024-07-24 05:21:16.104442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:01.619 [2024-07-24 05:21:16.104459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:32:01.619 [2024-07-24 05:21:16.104469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.619 [2024-07-24 05:21:16.104520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.619 [2024-07-24 05:21:16.104534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:01.619 [2024-07-24 05:21:16.104544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:01.619 [2024-07-24 05:21:16.104553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.619 [2024-07-24 05:21:16.104985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:01.619 [2024-07-24 05:21:16.105009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:01.619 [2024-07-24 05:21:16.105025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:32:01.619 [2024-07-24 05:21:16.105035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.619 [2024-07-24 05:21:16.105122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.619 [2024-07-24 05:21:16.105140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:01.619 [2024-07-24 05:21:16.105151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:32:01.619 [2024-07-24 05:21:16.105161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.619 [2024-07-24 05:21:16.105193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.619 [2024-07-24 05:21:16.105207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:01.619 [2024-07-24 05:21:16.105218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:01.619 [2024-07-24 05:21:16.105227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.619 [2024-07-24 05:21:16.105258] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:01.619 [2024-07-24 05:21:16.109335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.619 [2024-07-24 05:21:16.109369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:01.619 [2024-07-24 05:21:16.109399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.082 ms 00:32:01.619 [2024-07-24 05:21:16.109409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.619 [2024-07-24 05:21:16.109446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.619 [2024-07-24 05:21:16.109460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:01.619 [2024-07-24 05:21:16.109470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:01.619 [2024-07-24 05:21:16.109480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.619 [2024-07-24 05:21:16.109538] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:01.619 [2024-07-24 05:21:16.109565] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:01.619 [2024-07-24 05:21:16.109604] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:01.619 [2024-07-24 05:21:16.109622] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:32:01.619 [2024-07-24 05:21:16.109708] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:01.619 [2024-07-24 05:21:16.109722] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:01.619 [2024-07-24 05:21:16.109734] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:32:01.619 [2024-07-24 05:21:16.109748] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:01.619 [2024-07-24 05:21:16.109759] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:01.619 [2024-07-24 05:21:16.109769] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:01.619 [2024-07-24 05:21:16.109779] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:01.619 [2024-07-24 05:21:16.109792] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:01.619 [2024-07-24 05:21:16.109801] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:01.619 [2024-07-24 05:21:16.109811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.619 [2024-07-24 05:21:16.109821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:01.619 [2024-07-24 05:21:16.109831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:32:01.619 [2024-07-24 05:21:16.109841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.619 [2024-07-24 05:21:16.109934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.619 [2024-07-24 05:21:16.109949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:01.619 [2024-07-24 05:21:16.109960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:01.619 [2024-07-24 05:21:16.109970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.619 [2024-07-24 05:21:16.110076] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:01.620 [2024-07-24 05:21:16.110092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:01.620 [2024-07-24 05:21:16.110102] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:01.620 [2024-07-24 05:21:16.110113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:01.620 [2024-07-24 05:21:16.110123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:01.620 [2024-07-24 05:21:16.110132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:01.620 [2024-07-24 05:21:16.110142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:01.620 [2024-07-24 05:21:16.110152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:01.620 [2024-07-24 05:21:16.110162] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:01.620 [2024-07-24 05:21:16.110171] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:01.620 [2024-07-24 05:21:16.110180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:01.620 [2024-07-24 05:21:16.110189] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:01.620 [2024-07-24 05:21:16.110198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:01.620 [2024-07-24 05:21:16.110208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:01.620 [2024-07-24 05:21:16.110219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:01.620 [2024-07-24 05:21:16.110229] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:01.620 [2024-07-24 05:21:16.110238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:01.620 [2024-07-24 05:21:16.110247] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:01.620 [2024-07-24 05:21:16.110257] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:01.620 [2024-07-24 05:21:16.110266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:01.620 [2024-07-24 05:21:16.110275] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:01.620 [2024-07-24 05:21:16.110285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:01.620 [2024-07-24 05:21:16.110306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:01.620 [2024-07-24 05:21:16.110316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:01.620 [2024-07-24 05:21:16.110325] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:01.620 [2024-07-24 05:21:16.110335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:01.620 [2024-07-24 05:21:16.110344] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:01.620 [2024-07-24 05:21:16.110353] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:01.620 [2024-07-24 05:21:16.110362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:01.620 [2024-07-24 05:21:16.110371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:01.620 [2024-07-24 05:21:16.110381] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:01.620 [2024-07-24 05:21:16.110390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:01.620 [2024-07-24 05:21:16.110399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:01.620 [2024-07-24 05:21:16.110408] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:01.620 [2024-07-24 05:21:16.110431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:01.620 [2024-07-24 05:21:16.110441] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:01.620 [2024-07-24 05:21:16.110450] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:01.620 [2024-07-24 05:21:16.110459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:01.620 [2024-07-24 05:21:16.110468] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:01.620 [2024-07-24 05:21:16.110476] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:01.620 [2024-07-24 05:21:16.110485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:01.620 [2024-07-24 05:21:16.110494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:01.620 [2024-07-24 05:21:16.110504] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:01.620 [2024-07-24 05:21:16.110513] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:01.620 [2024-07-24 05:21:16.110522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:01.620 [2024-07-24 05:21:16.110532] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:01.620 [2024-07-24 05:21:16.110542] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:01.620 [2024-07-24 05:21:16.110552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:01.620 [2024-07-24 05:21:16.110561] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:01.620 [2024-07-24 05:21:16.110570] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:01.620 
[2024-07-24 05:21:16.110579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:01.620 [2024-07-24 05:21:16.110588] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:01.620 [2024-07-24 05:21:16.110597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:01.620 [2024-07-24 05:21:16.110607] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:01.620 [2024-07-24 05:21:16.110974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:01.620 [2024-07-24 05:21:16.111058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:01.620 [2024-07-24 05:21:16.111192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:01.620 [2024-07-24 05:21:16.111249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:01.620 [2024-07-24 05:21:16.111374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:01.620 [2024-07-24 05:21:16.111458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:01.620 [2024-07-24 05:21:16.111594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:01.620 [2024-07-24 05:21:16.111652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:01.620 [2024-07-24 05:21:16.111763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:01.620 [2024-07-24 05:21:16.111917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:01.620 [2024-07-24 05:21:16.111982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:01.620 [2024-07-24 05:21:16.112094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:01.620 [2024-07-24 05:21:16.112216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:01.620 [2024-07-24 05:21:16.112343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:01.620 [2024-07-24 05:21:16.112461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:01.620 [2024-07-24 05:21:16.112481] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:01.620 [2024-07-24 05:21:16.112494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:01.620 [2024-07-24 05:21:16.112505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:01.620 [2024-07-24 05:21:16.112516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:01.620 [2024-07-24 05:21:16.112527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:01.620 [2024-07-24 05:21:16.112537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:01.620 [2024-07-24 05:21:16.112548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.620 [2024-07-24 05:21:16.112559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:01.620 [2024-07-24 05:21:16.112570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.542 ms 00:32:01.620 [2024-07-24 05:21:16.112582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.620 [2024-07-24 05:21:16.149366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.620 [2024-07-24 05:21:16.149586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:01.620 [2024-07-24 05:21:16.149746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.722 ms 00:32:01.620 [2024-07-24 05:21:16.149899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.620 [2024-07-24 05:21:16.150047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.620 [2024-07-24 05:21:16.150100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:01.620 [2024-07-24 05:21:16.150201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:32:01.620 [2024-07-24 05:21:16.150264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.620 [2024-07-24 05:21:16.184890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.620 [2024-07-24 05:21:16.184962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:01.620 [2024-07-24 05:21:16.184981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.453 ms 00:32:01.620 [2024-07-24 05:21:16.184994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.620 [2024-07-24 05:21:16.185063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.620 [2024-07-24 05:21:16.185086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:01.620 [2024-07-24 05:21:16.185100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:01.620 [2024-07-24 05:21:16.185111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.620 [2024-07-24 05:21:16.185272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.620 [2024-07-24 05:21:16.185298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:01.621 [2024-07-24 05:21:16.185312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:32:01.621 [2024-07-24 05:21:16.185324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.621 [2024-07-24 05:21:16.185477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.621 [2024-07-24 05:21:16.185504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:01.621 [2024-07-24 05:21:16.185522] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:32:01.621 [2024-07-24 05:21:16.185533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.621 [2024-07-24 05:21:16.201549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.621 [2024-07-24 05:21:16.201587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:01.621 [2024-07-24 05:21:16.201622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.988 ms 00:32:01.621 [2024-07-24 05:21:16.201633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.621 [2024-07-24 05:21:16.201798] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:01.621 [2024-07-24 05:21:16.201820] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:01.621 [2024-07-24 05:21:16.201832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.621 [2024-07-24 05:21:16.201843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:01.621 [2024-07-24 05:21:16.201894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:01.621 [2024-07-24 05:21:16.201908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.621 [2024-07-24 05:21:16.214382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.621 [2024-07-24 05:21:16.214412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:01.621 [2024-07-24 05:21:16.214442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.448 ms 00:32:01.621 [2024-07-24 05:21:16.214452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.621 [2024-07-24 05:21:16.214560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.621 [2024-07-24 05:21:16.214575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:01.621 [2024-07-24 05:21:16.214586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:32:01.621 [2024-07-24 05:21:16.214595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.621 [2024-07-24 05:21:16.214657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.621 [2024-07-24 05:21:16.214673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:01.621 [2024-07-24 05:21:16.214684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:01.621 [2024-07-24 05:21:16.214694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.621 [2024-07-24 05:21:16.215466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.621 [2024-07-24 05:21:16.215491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:01.621 [2024-07-24 05:21:16.215504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:32:01.621 [2024-07-24 05:21:16.215515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.621 [2024-07-24 05:21:16.215537] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:01.621 [2024-07-24 05:21:16.215568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.621 [2024-07-24 05:21:16.215583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:01.621 [2024-07-24 05:21:16.215605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:01.621 [2024-07-24 05:21:16.215615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.621 [2024-07-24 05:21:16.226970] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:01.621 [2024-07-24 05:21:16.227177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.621 [2024-07-24 05:21:16.227195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:01.621 [2024-07-24 05:21:16.227207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.537 ms 00:32:01.621 [2024-07-24 05:21:16.227217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.621 [2024-07-24 05:21:16.229575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.621 [2024-07-24 05:21:16.229607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:01.621 [2024-07-24 05:21:16.229631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.328 ms 00:32:01.621 [2024-07-24 05:21:16.229641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.621 [2024-07-24 05:21:16.229720] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:01.621 [2024-07-24 05:21:16.230263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.621 [2024-07-24 05:21:16.230293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:01.621 [2024-07-24 05:21:16.230307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:32:01.621 [2024-07-24 05:21:16.230318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.621 [2024-07-24 05:21:16.230351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.621 [2024-07-24 05:21:16.230366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:01.621 [2024-07-24 05:21:16.230383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:01.621 [2024-07-24 05:21:16.230394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.621 [2024-07-24 05:21:16.230430] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:01.621 [2024-07-24 05:21:16.230446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.621 [2024-07-24 05:21:16.230456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:01.621 [2024-07-24 05:21:16.230467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:32:01.621 [2024-07-24 05:21:16.230478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.880 [2024-07-24 05:21:16.258889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.880 [2024-07-24 05:21:16.258935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:01.880 [2024-07-24 05:21:16.258967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.358 ms 00:32:01.880 [2024-07-24 05:21:16.258978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.880 [2024-07-24 05:21:16.259049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.880 [2024-07-24 05:21:16.259066] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:32:01.880 [2024-07-24 05:21:16.259077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:32:01.880 [2024-07-24 05:21:16.259088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:01.880 [2024-07-24 05:21:16.268800] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 163.082 ms, result 0
00:32:46.115  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-24 05:22:00.537189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:46.115 [2024-07-24 05:22:00.537271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:32:46.115 [2024-07-24 05:22:00.537311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:32:46.115 [2024-07-24 05:22:00.537323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:46.115 [2024-07-24 05:22:00.537353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:32:46.115 [2024-07-24 05:22:00.541696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:46.115 [2024-07-24 05:22:00.541907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:32:46.115 [2024-07-24 05:22:00.542056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.320 ms
00:32:46.115 [2024-07-24 05:22:00.542216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:46.115 [2024-07-24 05:22:00.542523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:46.115 [2024-07-24 05:22:00.542675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:32:46.115 [2024-07-24 05:22:00.542804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms
00:32:46.115 [2024-07-24 05:22:00.542876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:46.115 [2024-07-24 05:22:00.542997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:46.115
[2024-07-24 05:22:00.543123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:46.115 [2024-07-24 05:22:00.543266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:46.115 [2024-07-24 05:22:00.543321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.115 [2024-07-24 05:22:00.543497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:46.115 [2024-07-24 05:22:00.543636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:46.115 [2024-07-24 05:22:00.543701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:32:46.115 [2024-07-24 05:22:00.543787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.115 [2024-07-24 05:22:00.543928] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:46.115 [2024-07-24 05:22:00.543995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:32:46.115 [2024-07-24 05:22:00.544164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.544326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.544495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.544622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.544754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.544954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.545090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.545239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.545378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.545526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.545665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.545812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.545902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546726] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.546993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.547005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.547017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.547028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.547040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.547051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:46.115 [2024-07-24 05:22:00.547063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 
05:22:00.547155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 
00:32:46.116 [2024-07-24 05:22:00.547491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:46.116 [2024-07-24 05:22:00.547822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 
wr_cnt: 0 state: free
00:32:46.116 [2024-07-24 05:22:00.547833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:32:46.116 [2024-07-24 05:22:00.547844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:32:46.116 [2024-07-24 05:22:00.547855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:32:46.116 [2024-07-24 05:22:00.547866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:32:46.116 [2024-07-24 05:22:00.547893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:32:46.116 [2024-07-24 05:22:00.547904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:32:46.116 [2024-07-24 05:22:00.548167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:32:46.116 [2024-07-24 05:22:00.548269] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:32:46.116 [2024-07-24 05:22:00.548352] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbfa6140-daca-4402-8cfb-8aeef1de4c65
00:32:46.116 [2024-07-24 05:22:00.548442] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888
00:32:46.116 [2024-07-24 05:22:00.548481] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3104
00:32:46.116 [2024-07-24 05:22:00.548517] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3072
00:32:46.116 [2024-07-24 05:22:00.548626] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104
00:32:46.116 [2024-07-24 05:22:00.548676] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:32:46.116 [2024-07-24 05:22:00.548797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:32:46.116 [2024-07-24 05:22:00.548863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:32:46.116 [2024-07-24 05:22:00.548930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:32:46.116 [2024-07-24 05:22:00.549025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:32:46.116 [2024-07-24 05:22:00.549074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:46.116 [2024-07-24 05:22:00.549224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:32:46.116 [2024-07-24 05:22:00.549274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.148 ms
00:32:46.116 [2024-07-24 05:22:00.549314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:32:46.116 [2024-07-24 05:22:00.565327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:46.116 [2024-07-24 05:22:00.565501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:32:46.116 [2024-07-24 05:22:00.565545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.821 ms
00:32:46.116 [2024-07-24 05:22:00.565565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:32:46.116 [2024-07-24 05:22:00.566055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:46.116 [2024-07-24 05:22:00.566081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:32:46.116 [2024-07-24 05:22:00.566095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.429 ms
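
[editor note] The stats block above is internally consistent: the 133888 valid LBAs in band 1 match the device-wide total, and the reported WAF is just total writes divided by user writes, the 32 extra writes being the FTL's own metadata traffic. A one-line arithmetic check of the reported figure (illustrative only, not part of the test scripts):

    awk 'BEGIN { printf "WAF: %.4f\n", 3104 / 3072 }'   # prints WAF: 1.0104, matching the ftl_debug.c:216 line above
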
00:32:46.116 [2024-07-24 05:22:00.566107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.116 [2024-07-24 05:22:00.600002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.116 [2024-07-24 05:22:00.600226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:46.116 [2024-07-24 05:22:00.600351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.116 [2024-07-24 05:22:00.600402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.116 [2024-07-24 05:22:00.600581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.116 [2024-07-24 05:22:00.600642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:46.116 [2024-07-24 05:22:00.600690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.116 [2024-07-24 05:22:00.600808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.117 [2024-07-24 05:22:00.601087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.117 [2024-07-24 05:22:00.601219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:46.117 [2024-07-24 05:22:00.601358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.117 [2024-07-24 05:22:00.601427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.117 [2024-07-24 05:22:00.601545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.117 [2024-07-24 05:22:00.601656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:46.117 [2024-07-24 05:22:00.601712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.117 [2024-07-24 05:22:00.601798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.117 [2024-07-24 05:22:00.690541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.117 [2024-07-24 05:22:00.690776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:46.117 [2024-07-24 05:22:00.690966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.117 [2024-07-24 05:22:00.691021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.375 [2024-07-24 05:22:00.769276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.376 [2024-07-24 05:22:00.769553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:46.376 [2024-07-24 05:22:00.769679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.376 [2024-07-24 05:22:00.769730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.376 [2024-07-24 05:22:00.769880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.376 [2024-07-24 05:22:00.769945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:46.376 [2024-07-24 05:22:00.770046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.376 [2024-07-24 05:22:00.770094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.376 [2024-07-24 05:22:00.770187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.376 [2024-07-24 05:22:00.770332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:46.376 [2024-07-24 05:22:00.770385] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.376 [2024-07-24 05:22:00.770432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.376 [2024-07-24 05:22:00.770620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.376 [2024-07-24 05:22:00.770654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:46.376 [2024-07-24 05:22:00.770669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.376 [2024-07-24 05:22:00.770680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.376 [2024-07-24 05:22:00.770726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.376 [2024-07-24 05:22:00.770744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:46.376 [2024-07-24 05:22:00.770756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.376 [2024-07-24 05:22:00.770766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.376 [2024-07-24 05:22:00.770810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.376 [2024-07-24 05:22:00.770825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:46.376 [2024-07-24 05:22:00.770836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.376 [2024-07-24 05:22:00.770871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.376 [2024-07-24 05:22:00.770929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.376 [2024-07-24 05:22:00.770946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:46.376 [2024-07-24 05:22:00.770958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.376 [2024-07-24 05:22:00.770969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.376 [2024-07-24 05:22:00.771104] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 233.881 ms, result 0 00:32:47.312 00:32:47.312 00:32:47.312 05:22:01 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:49.212 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:49.212 05:22:03 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:49.212 05:22:03 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:32:49.212 05:22:03 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:49.212 05:22:03 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:49.212 05:22:03 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:49.212 05:22:03 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 86608 00:32:49.212 05:22:03 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 86608 ']' 00:32:49.212 05:22:03 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 86608 00:32:49.212 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (86608) - No such process 00:32:49.212 Process with pid 86608 is not found 00:32:49.212 05:22:03 ftl.ftl_restore_fast -- common/autotest_common.sh@975 -- # echo 'Process with pid 86608 is not found' 00:32:49.212 05:22:03 
ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:32:49.212 Remove shared memory files
00:32:49.212 05:22:03 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:32:49.212 05:22:03 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:32:49.212 05:22:03 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_cbfa6140-daca-4402-8cfb-8aeef1de4c65_band_md /dev/hugepages/ftl_cbfa6140-daca-4402-8cfb-8aeef1de4c65_l2p_l1 /dev/hugepages/ftl_cbfa6140-daca-4402-8cfb-8aeef1de4c65_l2p_l2 /dev/hugepages/ftl_cbfa6140-daca-4402-8cfb-8aeef1de4c65_l2p_l2_ctx /dev/hugepages/ftl_cbfa6140-daca-4402-8cfb-8aeef1de4c65_nvc_md /dev/hugepages/ftl_cbfa6140-daca-4402-8cfb-8aeef1de4c65_p2l_pool /dev/hugepages/ftl_cbfa6140-daca-4402-8cfb-8aeef1de4c65_sb /dev/hugepages/ftl_cbfa6140-daca-4402-8cfb-8aeef1de4c65_sb_shm /dev/hugepages/ftl_cbfa6140-daca-4402-8cfb-8aeef1de4c65_trim_bitmap /dev/hugepages/ftl_cbfa6140-daca-4402-8cfb-8aeef1de4c65_trim_log /dev/hugepages/ftl_cbfa6140-daca-4402-8cfb-8aeef1de4c65_trim_md /dev/hugepages/ftl_cbfa6140-daca-4402-8cfb-8aeef1de4c65_vmap
00:32:49.212 05:22:03 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:32:49.212 05:22:03 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:32:49.212 05:22:03 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:32:49.212
00:32:49.212 real	3m28.694s
00:32:49.212 user	3m15.726s
00:32:49.212 sys	0m14.528s
00:32:49.212 05:22:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:32:49.212 05:22:03 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:32:49.212 ************************************
00:32:49.212 END TEST ftl_restore_fast
00:32:49.212 ************************************
00:32:49.470 05:22:03 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:32:49.470 05:22:03 ftl -- ftl/ftl.sh@14 -- # killprocess 78607
00:32:49.470 05:22:03 ftl -- common/autotest_common.sh@948 -- # '[' -z 78607 ']'
00:32:49.470 05:22:03 ftl -- common/autotest_common.sh@952 -- # kill -0 78607
00:32:49.470 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (78607) - No such process
00:32:49.470 Process with pid 78607 is not found
00:32:49.471 05:22:03 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 78607 is not found'
00:32:49.471 05:22:03 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:32:49.471 05:22:03 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=88668
00:32:49.471 05:22:03 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:32:49.471 05:22:03 ftl -- ftl/ftl.sh@20 -- # waitforlisten 88668
00:32:49.471 05:22:03 ftl -- common/autotest_common.sh@829 -- # '[' -z 88668 ']'
00:32:49.471 05:22:03 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:49.471 05:22:03 ftl -- common/autotest_common.sh@834 -- # local max_retries=100
00:32:49.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:49.471 05:22:03 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:49.471 05:22:03 ftl -- common/autotest_common.sh@838 -- # xtrace_disable
00:32:49.471 05:22:03 ftl -- common/autotest_common.sh@10 -- # set +x
00:32:49.471 [2024-07-24 05:22:04.000038] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization...
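
[editor note] Both killprocess calls above (pids 86608 and 78607) take the "No such process" path: kill -0 delivers no signal and only probes whether the pid still exists, so a process that already exited is reported rather than treated as a test failure. A simplified sketch of the pattern (the real autotest_common.sh helper additionally checks the platform and process name before killing):

    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"                     # terminate the target...
            wait "$pid" 2>/dev/null || true # ...and reap it if it is our child
        else
            echo "Process with pid $pid is not found"
        fi
    }
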
00:32:49.471 [2024-07-24 05:22:04.000215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88668 ] 00:32:49.729 [2024-07-24 05:22:04.177796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.987 [2024-07-24 05:22:04.406621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.554 05:22:05 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:50.554 05:22:05 ftl -- common/autotest_common.sh@862 -- # return 0 00:32:50.554 05:22:05 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:50.811 nvme0n1 00:32:50.811 05:22:05 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:50.811 05:22:05 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:50.811 05:22:05 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:51.069 05:22:05 ftl -- ftl/common.sh@28 -- # stores=434e22a3-534d-4807-99c5-f5e1b4a84693 00:32:51.069 05:22:05 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:51.069 05:22:05 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 434e22a3-534d-4807-99c5-f5e1b4a84693 00:32:51.327 05:22:05 ftl -- ftl/ftl.sh@23 -- # killprocess 88668 00:32:51.327 05:22:05 ftl -- common/autotest_common.sh@948 -- # '[' -z 88668 ']' 00:32:51.327 05:22:05 ftl -- common/autotest_common.sh@952 -- # kill -0 88668 00:32:51.327 05:22:05 ftl -- common/autotest_common.sh@953 -- # uname 00:32:51.327 05:22:05 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:51.327 05:22:05 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88668 00:32:51.327 05:22:05 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:51.327 05:22:05 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:51.327 killing process with pid 88668 00:32:51.327 05:22:05 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88668' 00:32:51.327 05:22:05 ftl -- common/autotest_common.sh@967 -- # kill 88668 00:32:51.327 05:22:05 ftl -- common/autotest_common.sh@972 -- # wait 88668 00:32:53.230 05:22:07 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:53.488 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:53.488 Waiting for block devices as requested 00:32:53.488 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:53.488 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:53.746 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:53.746 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:59.022 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:59.022 05:22:13 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:59.022 Remove shared memory files 00:32:59.022 05:22:13 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:59.022 05:22:13 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:59.022 05:22:13 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:59.022 05:22:13 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:59.022 05:22:13 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:59.022 05:22:13 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:59.022 00:32:59.022 real 
15m30.965s 00:32:59.022 user 18m12.888s 00:32:59.022 sys 1m39.808s 00:32:59.022 05:22:13 ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:59.022 05:22:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:59.022 ************************************ 00:32:59.022 END TEST ftl 00:32:59.022 ************************************ 00:32:59.022 05:22:13 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:59.022 05:22:13 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:59.022 05:22:13 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:59.022 05:22:13 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:59.022 05:22:13 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:59.022 05:22:13 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:59.022 05:22:13 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:59.022 05:22:13 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:59.022 05:22:13 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:59.022 05:22:13 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:59.022 05:22:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:59.022 05:22:13 -- common/autotest_common.sh@10 -- # set +x 00:32:59.022 05:22:13 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:59.022 05:22:13 -- common/autotest_common.sh@1390 -- # local autotest_es=0 00:32:59.022 05:22:13 -- common/autotest_common.sh@1391 -- # xtrace_disable 00:32:59.022 05:22:13 -- common/autotest_common.sh@10 -- # set +x 00:33:00.399 INFO: APP EXITING 00:33:00.399 INFO: killing all VMs 00:33:00.399 INFO: killing vhost app 00:33:00.399 INFO: EXIT DONE 00:33:00.659 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:01.226 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:01.226 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:01.226 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:01.226 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:01.486 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:02.054 Cleaning 00:33:02.054 Removing: /var/run/dpdk/spdk0/config 00:33:02.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:02.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:02.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:02.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:02.054 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:02.054 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:02.054 Removing: /var/run/dpdk/spdk0 00:33:02.054 Removing: /var/run/dpdk/spdk_pid61822 00:33:02.054 Removing: /var/run/dpdk/spdk_pid62027 00:33:02.054 Removing: /var/run/dpdk/spdk_pid62237 00:33:02.054 Removing: /var/run/dpdk/spdk_pid62341 00:33:02.054 Removing: /var/run/dpdk/spdk_pid62381 00:33:02.054 Removing: /var/run/dpdk/spdk_pid62508 00:33:02.054 Removing: /var/run/dpdk/spdk_pid62526 00:33:02.054 Removing: /var/run/dpdk/spdk_pid62707 00:33:02.054 Removing: /var/run/dpdk/spdk_pid62792 00:33:02.054 Removing: /var/run/dpdk/spdk_pid62880 00:33:02.054 Removing: /var/run/dpdk/spdk_pid62983 00:33:02.054 Removing: /var/run/dpdk/spdk_pid63072 00:33:02.054 Removing: /var/run/dpdk/spdk_pid63117 00:33:02.054 Removing: /var/run/dpdk/spdk_pid63154 00:33:02.054 Removing: /var/run/dpdk/spdk_pid63217 00:33:02.054 Removing: /var/run/dpdk/spdk_pid63322 00:33:02.054 Removing: /var/run/dpdk/spdk_pid63769 00:33:02.054 Removing: /var/run/dpdk/spdk_pid63833 00:33:02.054 
Removing: /var/run/dpdk/spdk_pid63898 00:33:02.054 Removing: /var/run/dpdk/spdk_pid63914 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64040 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64056 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64175 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64191 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64255 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64273 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64330 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64346 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64518 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64557 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64632 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64702 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64739 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64811 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64853 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64900 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64941 00:33:02.054 Removing: /var/run/dpdk/spdk_pid64982 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65028 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65075 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65116 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65157 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65204 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65250 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65291 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65338 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65379 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65424 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65466 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65513 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65557 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65601 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65642 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65695 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65772 00:33:02.054 Removing: /var/run/dpdk/spdk_pid65882 00:33:02.054 Removing: /var/run/dpdk/spdk_pid66044 00:33:02.054 Removing: /var/run/dpdk/spdk_pid66132 00:33:02.054 Removing: /var/run/dpdk/spdk_pid66170 00:33:02.054 Removing: /var/run/dpdk/spdk_pid66630 00:33:02.054 Removing: /var/run/dpdk/spdk_pid66728 00:33:02.054 Removing: /var/run/dpdk/spdk_pid66848 00:33:02.054 Removing: /var/run/dpdk/spdk_pid66901 00:33:02.054 Removing: /var/run/dpdk/spdk_pid66931 00:33:02.054 Removing: /var/run/dpdk/spdk_pid67003 00:33:02.054 Removing: /var/run/dpdk/spdk_pid67631 00:33:02.054 Removing: /var/run/dpdk/spdk_pid67673 00:33:02.054 Removing: /var/run/dpdk/spdk_pid68172 00:33:02.054 Removing: /var/run/dpdk/spdk_pid68269 00:33:02.054 Removing: /var/run/dpdk/spdk_pid68376 00:33:02.054 Removing: /var/run/dpdk/spdk_pid68434 00:33:02.054 Removing: /var/run/dpdk/spdk_pid68459 00:33:02.054 Removing: /var/run/dpdk/spdk_pid68490 00:33:02.054 Removing: /var/run/dpdk/spdk_pid70335 00:33:02.054 Removing: /var/run/dpdk/spdk_pid70474 00:33:02.054 Removing: /var/run/dpdk/spdk_pid70488 00:33:02.054 Removing: /var/run/dpdk/spdk_pid70501 00:33:02.054 Removing: /var/run/dpdk/spdk_pid70540 00:33:02.054 Removing: /var/run/dpdk/spdk_pid70544 00:33:02.054 Removing: /var/run/dpdk/spdk_pid70556 00:33:02.055 Removing: /var/run/dpdk/spdk_pid70601 00:33:02.055 Removing: /var/run/dpdk/spdk_pid70605 00:33:02.055 Removing: /var/run/dpdk/spdk_pid70617 00:33:02.055 Removing: /var/run/dpdk/spdk_pid70662 00:33:02.055 Removing: /var/run/dpdk/spdk_pid70666 00:33:02.055 Removing: /var/run/dpdk/spdk_pid70678 00:33:02.314 Removing: /var/run/dpdk/spdk_pid72033 00:33:02.314 Removing: 
/var/run/dpdk/spdk_pid72129 00:33:02.314 Removing: /var/run/dpdk/spdk_pid73519 00:33:02.314 Removing: /var/run/dpdk/spdk_pid74864 00:33:02.314 Removing: /var/run/dpdk/spdk_pid74979 00:33:02.314 Removing: /var/run/dpdk/spdk_pid75083 00:33:02.314 Removing: /var/run/dpdk/spdk_pid75193 00:33:02.314 Removing: /var/run/dpdk/spdk_pid75325 00:33:02.314 Removing: /var/run/dpdk/spdk_pid75401 00:33:02.314 Removing: /var/run/dpdk/spdk_pid75535 00:33:02.314 Removing: /var/run/dpdk/spdk_pid75903 00:33:02.314 Removing: /var/run/dpdk/spdk_pid75945 00:33:02.314 Removing: /var/run/dpdk/spdk_pid76413 00:33:02.314 Removing: /var/run/dpdk/spdk_pid76593 00:33:02.314 Removing: /var/run/dpdk/spdk_pid76691 00:33:02.314 Removing: /var/run/dpdk/spdk_pid76803 00:33:02.314 Removing: /var/run/dpdk/spdk_pid76849 00:33:02.314 Removing: /var/run/dpdk/spdk_pid76876 00:33:02.314 Removing: /var/run/dpdk/spdk_pid77165 00:33:02.314 Removing: /var/run/dpdk/spdk_pid77221 00:33:02.314 Removing: /var/run/dpdk/spdk_pid77302 00:33:02.314 Removing: /var/run/dpdk/spdk_pid77683 00:33:02.314 Removing: /var/run/dpdk/spdk_pid77825 00:33:02.314 Removing: /var/run/dpdk/spdk_pid78607 00:33:02.314 Removing: /var/run/dpdk/spdk_pid78737 00:33:02.314 Removing: /var/run/dpdk/spdk_pid78919 00:33:02.314 Removing: /var/run/dpdk/spdk_pid79022 00:33:02.314 Removing: /var/run/dpdk/spdk_pid79380 00:33:02.314 Removing: /var/run/dpdk/spdk_pid79651 00:33:02.314 Removing: /var/run/dpdk/spdk_pid80000 00:33:02.314 Removing: /var/run/dpdk/spdk_pid80182 00:33:02.314 Removing: /var/run/dpdk/spdk_pid80335 00:33:02.314 Removing: /var/run/dpdk/spdk_pid80392 00:33:02.314 Removing: /var/run/dpdk/spdk_pid80539 00:33:02.314 Removing: /var/run/dpdk/spdk_pid80574 00:33:02.314 Removing: /var/run/dpdk/spdk_pid80628 00:33:02.314 Removing: /var/run/dpdk/spdk_pid80848 00:33:02.314 Removing: /var/run/dpdk/spdk_pid81071 00:33:02.314 Removing: /var/run/dpdk/spdk_pid81532 00:33:02.314 Removing: /var/run/dpdk/spdk_pid82028 00:33:02.314 Removing: /var/run/dpdk/spdk_pid82495 00:33:02.314 Removing: /var/run/dpdk/spdk_pid83072 00:33:02.314 Removing: /var/run/dpdk/spdk_pid83204 00:33:02.314 Removing: /var/run/dpdk/spdk_pid83298 00:33:02.314 Removing: /var/run/dpdk/spdk_pid84010 00:33:02.314 Removing: /var/run/dpdk/spdk_pid84084 00:33:02.314 Removing: /var/run/dpdk/spdk_pid84564 00:33:02.314 Removing: /var/run/dpdk/spdk_pid85007 00:33:02.314 Removing: /var/run/dpdk/spdk_pid85553 00:33:02.314 Removing: /var/run/dpdk/spdk_pid85671 00:33:02.314 Removing: /var/run/dpdk/spdk_pid85724 00:33:02.314 Removing: /var/run/dpdk/spdk_pid85794 00:33:02.314 Removing: /var/run/dpdk/spdk_pid85851 00:33:02.314 Removing: /var/run/dpdk/spdk_pid85925 00:33:02.314 Removing: /var/run/dpdk/spdk_pid86132 00:33:02.314 Removing: /var/run/dpdk/spdk_pid86193 00:33:02.314 Removing: /var/run/dpdk/spdk_pid86265 00:33:02.314 Removing: /var/run/dpdk/spdk_pid86335 00:33:02.314 Removing: /var/run/dpdk/spdk_pid86369 00:33:02.314 Removing: /var/run/dpdk/spdk_pid86448 00:33:02.314 Removing: /var/run/dpdk/spdk_pid86608 00:33:02.314 Removing: /var/run/dpdk/spdk_pid86825 00:33:02.314 Removing: /var/run/dpdk/spdk_pid87263 00:33:02.314 Removing: /var/run/dpdk/spdk_pid87724 00:33:02.314 Removing: /var/run/dpdk/spdk_pid88183 00:33:02.314 Removing: /var/run/dpdk/spdk_pid88668 00:33:02.314 Clean 00:33:02.314 05:22:16 -- common/autotest_common.sh@1449 -- # return 0 00:33:02.314 05:22:16 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:33:02.314 05:22:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:02.314 05:22:16 
-- common/autotest_common.sh@10 -- # set +x 00:33:02.573 05:22:16 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:33:02.573 05:22:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:02.573 05:22:16 -- common/autotest_common.sh@10 -- # set +x 00:33:02.573 05:22:17 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:02.573 05:22:17 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:02.573 05:22:17 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:02.573 05:22:17 -- spdk/autotest.sh@391 -- # hash lcov 00:33:02.573 05:22:17 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:02.573 05:22:17 -- spdk/autotest.sh@393 -- # hostname 00:33:02.573 05:22:17 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:02.832 geninfo: WARNING: invalid characters removed from testname! 00:33:29.402 05:22:39 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:29.402 05:22:43 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:31.934 05:22:46 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:34.499 05:22:48 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:37.032 05:22:51 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:39.562 05:22:54 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:42.094 05:22:56 -- 
spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:42.094 05:22:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:42.094 05:22:56 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:42.094 05:22:56 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:42.094 05:22:56 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:42.094 05:22:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.094 05:22:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.094 05:22:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.094 05:22:56 -- paths/export.sh@5 -- $ export PATH 00:33:42.094 05:22:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.094 05:22:56 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:42.094 05:22:56 -- common/autobuild_common.sh@447 -- $ date +%s 00:33:42.094 05:22:56 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721798576.XXXXXX 00:33:42.094 05:22:56 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721798576.07HtGM 00:33:42.094 05:22:56 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:33:42.094 05:22:56 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:33:42.094 05:22:56 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:33:42.094 05:22:56 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:42.094 05:22:56 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:42.094 05:22:56 -- common/autobuild_common.sh@463 -- $ get_config_params 00:33:42.094 05:22:56 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:33:42.094 05:22:56 -- common/autotest_common.sh@10 -- $ set +x 00:33:42.094 05:22:56 -- common/autobuild_common.sh@463 -- $ 
config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:33:42.094 05:22:56 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:33:42.094 05:22:56 -- pm/common@17 -- $ local monitor 00:33:42.094 05:22:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:42.094 05:22:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:42.094 05:22:56 -- pm/common@25 -- $ sleep 1 00:33:42.094 05:22:56 -- pm/common@21 -- $ date +%s 00:33:42.094 05:22:56 -- pm/common@21 -- $ date +%s 00:33:42.094 05:22:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721798576 00:33:42.094 05:22:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721798576 00:33:42.353 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721798576_collect-vmstat.pm.log 00:33:42.353 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721798576_collect-cpu-load.pm.log 00:33:43.290 05:22:57 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:33:43.290 05:22:57 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:33:43.290 05:22:57 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:33:43.290 05:22:57 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:43.290 05:22:57 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:43.290 05:22:57 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:43.290 05:22:57 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:43.290 05:22:57 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:43.290 05:22:57 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:43.290 05:22:57 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:43.290 05:22:57 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:43.290 05:22:57 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:43.290 05:22:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:43.290 05:22:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:43.290 05:22:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:43.290 05:22:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:33:43.290 05:22:57 -- pm/common@44 -- $ pid=90384 00:33:43.290 05:22:57 -- pm/common@50 -- $ kill -TERM 90384 00:33:43.290 05:22:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:43.290 05:22:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:33:43.290 05:22:57 -- pm/common@44 -- $ pid=90385 00:33:43.290 05:22:57 -- pm/common@50 -- $ kill -TERM 90385 00:33:43.290 + [[ -n 5212 ]] 00:33:43.290 + sudo kill 5212 00:33:43.300 [Pipeline] } 00:33:43.321 [Pipeline] // timeout 00:33:43.326 [Pipeline] } 00:33:43.342 [Pipeline] // stage 00:33:43.347 [Pipeline] } 00:33:43.364 [Pipeline] // catchError 00:33:43.374 [Pipeline] stage 00:33:43.376 [Pipeline] { (Stop VM) 00:33:43.391 [Pipeline] sh 00:33:43.681 + vagrant halt 00:33:46.985 ==> 
default: Halting domain...
00:33:53.559 [Pipeline] sh
00:33:53.840 + vagrant destroy -f
00:33:57.133 ==> default: Removing domain...
00:33:57.144 [Pipeline] sh
00:33:57.422 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:33:57.430 [Pipeline] }
00:33:57.447 [Pipeline] // stage
00:33:57.452 [Pipeline] }
00:33:57.467 [Pipeline] // dir
00:33:57.472 [Pipeline] }
00:33:57.487 [Pipeline] // wrap
00:33:57.493 [Pipeline] }
00:33:57.507 [Pipeline] // catchError
00:33:57.515 [Pipeline] stage
00:33:57.516 [Pipeline] { (Epilogue)
00:33:57.529 [Pipeline] sh
00:33:57.810 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:03.091 [Pipeline] catchError
00:34:03.094 [Pipeline] {
00:34:03.108 [Pipeline] sh
00:34:03.390 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:03.390 Artifacts sizes are good
00:34:03.417 [Pipeline] }
00:34:03.433 [Pipeline] // catchError
00:34:03.445 [Pipeline] archiveArtifacts
00:34:03.457 Archiving artifacts
00:34:03.601 [Pipeline] cleanWs
00:34:03.612 [WS-CLEANUP] Deleting project workspace...
00:34:03.613 [WS-CLEANUP] Deferred wipeout is used...
00:34:03.619 [WS-CLEANUP] done
00:34:03.621 [Pipeline] }
00:34:03.638 [Pipeline] // stage
00:34:03.644 [Pipeline] }
00:34:03.663 [Pipeline] // node
00:34:03.669 [Pipeline] End of Pipeline
00:34:03.715 Finished: SUCCESS
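
[editor note] The coverage pass logged earlier (spdk/autotest.sh@393 through @399) reduces to three operations: capture post-test counters with lcov, merge them with the pre-test baseline, then strip vendored and system sources so only SPDK code is reported. A condensed sketch of that pipeline, with the long --rc branch/function-coverage flags elided and $SPDK_DIR standing in for /home/vagrant/spdk_repo/spdk; the log runs each -r filter as a separate step rather than in a loop:

    lcov -q -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info   # capture post-test counters
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info   # merge with the pre-test baseline
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r cov_total.info "$pat" -o cov_total.info        # drop vendored and system code
    done
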